
    Robust sign language detection for hearing disabled persons by Improved Coyote Optimization Algorithm with deep learning

    Sign language (SL) recognition for individuals with hearing disabilities leverages machine learning (ML) and computer vision (CV) approaches to interpret and understand SL gestures. Using cameras and deep learning (DL) approaches, namely convolutional neural networks (CNNs) and recurrent neural networks (RNNs), these models analyze the facial expressions, hand movements, and body gestures associated with SL. The major challenges in SL recognition include the diversity of signs, differences in signing styles, and the need to recognize the context in which signs are used. Therefore, this manuscript develops an SL detection technique using an Improved Coyote Optimization Algorithm with DL (SLR-ICOADL) for hearing-disabled persons. The goal of the SLR-ICOADL technique is an accurate detection model that enables communication for persons who use SL as their primary means of expression. At the initial stage, the SLR-ICOADL technique applies a bilateral filtering (BF) approach for noise elimination. Following this, it uses Inception-ResNetv2 for feature extraction. Meanwhile, the ICOA is utilized to select the optimal hyperparameter values of the DL model. Finally, an extreme learning machine (ELM) classification model is utilized to recognize various kinds of signs. To exhibit the better performance of the SLR-ICOADL approach, a detailed set of experiments is performed. The experimental outcomes emphasize that the SLR-ICOADL technique attains promising performance in the SL detection process.
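
    The abstract describes the pipeline only at a high level; as a rough illustration of its final stage, below is a minimal extreme learning machine (ELM) classifier sketch in NumPy, assuming pre-extracted feature vectors (such as Inception-ResNetv2 embeddings) and integer class labels. The class name, hidden-layer size, and toy data are illustrative, not taken from the paper.

```python
import numpy as np

class ELMClassifier:
    """Minimal extreme learning machine: a random, fixed hidden layer
    followed by output weights solved in closed form via least squares."""

    def __init__(self, n_hidden=256, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        # Random projection plus nonlinearity; W and b are never trained.
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                      # one-hot targets
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T             # closed-form output weights
        return self

    def predict(self, X):
        return np.argmax(self._hidden(X) @ self.beta, axis=1)

# Toy usage with random "features" standing in for CNN embeddings.
X = np.random.default_rng(1).normal(size=(200, 64))
y = (X[:, 0] > 0).astype(int)
clf = ELMClassifier(n_hidden=128).fit(X[:150], y[:150])
print("toy accuracy:", (clf.predict(X[150:]) == y[150:]).mean())
```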

    Artificial Rabbit Optimizer with deep learning for fall detection of disabled people in the IoT Environment

    Fall detection (FD) for disabled persons on the Internet of Things (IoT) platform combines sensor technologies and data analytics to automatically identify and respond to fall events. In this regard, IoT devices such as wearable sensors or ambient sensors in the personal space play a vital role in continuously monitoring the user's movements. FD employs deep learning (DL) in an IoT platform using sensors, namely accelerometers or depth cameras, to capture data related to human movements. The DL approaches are frequently recurrent neural networks (RNNs) or convolutional neural networks (CNNs) trained on various databases to recognize patterns associated with falls. The trained methods are then executed on edge devices or cloud environments for real-time analysis of incoming sensor data. This method differentiates normal activities from potential falls, triggering alerts and reports to caregivers or emergency services once a fall is identified. We designed an Artificial Rabbit Optimizer with a DL-based FD and classification (ARODL-FDC) system for the IoT environment. The ARODL-FDC approach aims to detect and categorize fall events to assist elderly and disabled people. The ARODL-FDC technique comprises a four-stage process. Initially, the input data is preprocessed by Gaussian filtering (GF). The ARODL-FDC technique then applies the residual network (ResNet) model for feature extraction. Besides, the ARO algorithm is utilized for better hyperparameter choice of the ResNet algorithm. At the final stage, the full Elman Neural Network (FENN) model is utilized for the classification and recognition of fall events. The ARODL-FDC technique was tested on a fall dataset. The simulation results infer that the ARODL-FDC technique reaches promising performance over compared models concerning various measures.
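
    As a hedged sketch of the preprocessing stage only, the following applies Gaussian filtering (the GF step) to a tri-axial accelerometer stream and flags candidate impacts by a magnitude threshold. The sampling rate, smoothing width, and threshold are assumptions for illustration; the paper's full system feeds ResNet features and a FENN classifier downstream.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def fall_candidates(accel, fs=50.0, sigma_s=0.1, g_threshold=2.5):
    """Flag candidate fall impacts in a tri-axial accelerometer stream.

    accel: array of shape (n_samples, 3) in units of g.
    fs: sampling rate in Hz (assumed).
    sigma_s: Gaussian smoothing width in seconds (the GF stage).
    g_threshold: magnitude threshold, in g, for an impact spike (assumed).
    """
    magnitude = np.linalg.norm(accel, axis=1)
    smoothed = gaussian_filter1d(magnitude, sigma=sigma_s * fs)
    return np.flatnonzero(smoothed > g_threshold)  # candidate impact indices

# Toy stream: quiet signal around 1 g with one injected impact burst.
rng = np.random.default_rng(0)
stream = rng.normal([0, 0, 1], 0.05, size=(500, 3))
stream[295:305] += [0.0, 0.0, 3.0]  # simulated impact
print(fall_candidates(stream))
```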

    Critical success factors for ERP systems' post-implementations of SMEs in Saudi Arabia: a top management and vendors' perspective

    Although numerous case studies have determined the critical success factors (CSFs) for enterprise resource planning (ERP) during the adoption and implementation stages, empirical investigations of CSFs for ERP in the post-implementation stage (after going live) are scarce. As such, this study examined the influence of top management support and vendor support as CSFs on the post-implementation stage of ERP systems in small and medium enterprises (SMEs) established in the Kingdom of Saudi Arabia (KSA). A total of 177 end-users of ERP systems from two manufacturing organizations in KSA that had implemented on-premises ERP systems participated in this study. Data gathered from structured questionnaires were analyzed using the SmartPLS3 and SPSS software programs, and regression analysis was performed to assess the correlations among the variables. Out of seven CSFs identified from the literature, the impact of top management support was significant on user training, competency of the internal Information Technology (IT) department, and effective communication between departments, but insignificant on continuous vendor support. Meanwhile, continuous vendor support had a significant influence on continuous integration of the system, but an insignificant one on user interfaces and custom code. The study outcomes may serve as practical guidance for effective ERP post-implementation. Referring to the proposed research model, ERP post-implementation success in KSA was significantly influenced by top management support, whereas continuous vendor support displayed a substantial impact on the continuous integration of ERP systems.
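
    The study's analysis was run in SmartPLS3 and SPSS; purely as an illustrative stand-in, the snippet below fits one structural path (top management support predicting user training) as an ordinary least-squares regression on hypothetical Likert-scale data. The column names and simulated responses are invented for the example, and the study itself used PLS-SEM, which fits all paths jointly, rather than OLS.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical Likert-scale responses standing in for the 177 ERP end-users;
# the variable names are illustrative, not from the study's instrument.
rng = np.random.default_rng(42)
n = 177
top_mgmt_support = rng.integers(1, 6, size=n).astype(float)
user_training = 0.6 * top_mgmt_support + rng.normal(0, 0.8, size=n)

# One structural path (top management support -> user training) fitted as OLS.
X = sm.add_constant(top_mgmt_support)
model = sm.OLS(user_training, X).fit()
print("coefficients:", model.params)   # [intercept, path coefficient]
print("p-values:   ", model.pvalues)   # significance of the path
```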

    Artificial-Intelligence-Based Decision Making for Oral Potentially Malignant Disorder Diagnosis in Internet of Medical Things Environment

    Oral cancer is considered one of the most common cancer types in several countries. Early-stage identification is essential for better prognosis, treatment, and survival. To enhance precision medicine, Internet of Medical Things (IoMT) and deep learning (DL) models can be developed for automated oral cancer classification to improve the detection rate and decrease cancer-specific mortality. This article focuses on the design of an optimal Inception-Deep Convolution Neural Network for Oral Potentially Malignant Disorder Detection (OIDCNN-OPMDD) technique in the IoMT environment. The presented OIDCNN-OPMDD technique concentrates on identifying and classifying oral cancer using an IoMT device-based data collection process. In this study, feature extraction and classification are performed using the IDCNN model, which integrates the Inception module with a DCNN. To enhance the classification performance of the IDCNN model, the moth flame optimization (MFO) technique is employed. The experimental results of the OIDCNN-OPMDD technique are investigated under specific measures. The experimental outcomes point out the enhanced performance of the OIDCNN-OPMDD model over other DL models.
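
    The paper's IDCNN integrates an Inception module with a DCNN; as a minimal sketch of what such a block can look like, here is a generic Inception-style block in PyTorch with parallel 1x1, 3x3, 5x5, and pooled branches. Channel counts and input size are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Inception-style block: parallel 1x1, 3x3, 5x5, and pooled branches
    whose outputs are concatenated along the channel axis."""

    def __init__(self, in_ch, branch_ch=16):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=3, padding=1))
        self.b5 = nn.Sequential(
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),
            nn.Conv2d(branch_ch, branch_ch, kernel_size=5, padding=2))
        self.bp = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, branch_ch, kernel_size=1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# A 3-channel lesion-image batch -> 64 feature channels (4 branches x 16).
x = torch.randn(2, 3, 224, 224)
print(InceptionBlock(3)(x).shape)  # torch.Size([2, 64, 224, 224])
```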

    Wavelet Mutation with Aquila Optimization-Based Routing Protocol for Energy-Aware Wireless Communication

    Wireless sensor networks (WSNs) have been developed to support several applications, including environmental monitoring, traffic control, smart battlefields, and home automation. WSNs comprise numerous sensor nodes that can be dispersed across an area of interest to carry out sensing and computation. In WSNs, routing is a very significant task that should be managed prudently. The main purpose of a routing algorithm is to transfer data between sensor nodes (SNs) and base stations (BSs) to accomplish communication. A good routing protocol should be adaptive and scalable to variations in network topology; a scalable protocol has to perform well when the workload increases or the network grows larger. Routing also involves many challenges, including security, energy consumption, scalability, connectivity, node deployment, and coverage. This article introduces a wavelet mutation with Aquila optimization-based routing (WMAO-EAR) protocol for wireless communication. The presented WMAO-EAR technique aims to accomplish an energy-aware routing process in WSNs. To do this, the WMAO-EAR technique first derives the WMAO algorithm by integrating wavelet mutation with the Aquila optimization (AO) algorithm. A fitness function is derived using distinct constraints, such as delay, energy, distance, and security. By setting a mutation probability P, every individual, after the exploration and exploitation phases, undergoes mutation with that probability via the wavelet mutation process. To demonstrate the enhanced performance of the WMAO-EAR technique, a comprehensive simulation analysis is carried out. The experimental outcomes establish the superiority of the WMAO-EAR method over other recent approaches.
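
    As a sketch of the mutation operator named in the abstract, the following implements one common formulation of wavelet mutation: a Morlet-wavelet-scaled step whose amplitude decays over iterations, shifting the search from exploration to fine-tuning. The constants g and zeta and the decay schedule are typical defaults from the wavelet-mutation literature, assumed rather than taken from this paper.

```python
import numpy as np

def wavelet_mutation(x, lb, ub, t, t_max, p_mut=0.1, g=10_000, zeta=2.0, rng=None):
    """Mutate each coordinate of x with probability p_mut by a step scaled
    with the Morlet wavelet psi(u) = exp(-u^2/2) * cos(5u)."""
    rng = rng or np.random.default_rng()
    # Dilation parameter a grows with iteration t, shrinking step magnitude.
    a = np.exp(-np.log(g) * (1 - t / t_max) ** zeta + np.log(g))
    phi = rng.uniform(-2.5 * a, 2.5 * a, size=x.shape)
    sigma = (1 / np.sqrt(a)) * np.exp(-((phi / a) ** 2) / 2) * np.cos(5 * phi / a)
    # Positive sigma pushes toward the upper bound, negative toward the lower.
    step = np.where(sigma > 0, sigma * (ub - x), sigma * (x - lb))
    mutated = np.where(rng.random(x.shape) < p_mut, x + step, x)
    return np.clip(mutated, lb, ub)

# Early vs. late mutation of the same individual: steps shrink over time.
x = np.full(5, 0.5); lb, ub = np.zeros(5), np.ones(5)
print(wavelet_mutation(x, lb, ub, t=1,  t_max=100, p_mut=1.0, rng=np.random.default_rng(0)))
print(wavelet_mutation(x, lb, ub, t=99, t_max=100, p_mut=1.0, rng=np.random.default_rng(0)))
```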

    Modified Equilibrium Optimization Algorithm With Deep Learning-Based DDoS Attack Classification in 5G Networks

    5G networks offer high-speed, low-latency communication for various applications. As 5G networks introduce new capabilities and support a wide range of services, they also become more vulnerable to different kinds of cyberattacks, particularly Distributed Denial of Service (DDoS) attacks. Effective DDoS attack classification in 5G networks is a critical aspect of ensuring the security, availability, and performance of these advanced communication infrastructures. Recently, machine learning (ML) and deep learning (DL) models have been employed for accurate DDoS attack detection. In this aspect, this study designs a Modified Equilibrium Optimization Algorithm with Deep Learning based DDoS Attack Classification (MEOADL-ADC) method for 5G networks. The goal of the MEOADL-ADC technique is the automated classification of DDoS attacks in the 5G network. The MEOADL-ADC technique follows a three-stage process of feature selection, classification, and hyperparameter tuning. Primarily, the MEOADL-ADC technique employs an MEOA-based feature selection approach. Next, it utilizes the long short-term memory (LSTM) model for the classification of DDoS attacks. Finally, the tunicate swarm algorithm (TSA) is exploited to adjust the hyperparameters of the LSTM model. The design of MEOA-based feature selection and TSA-based hyperparameter tuning shows the novelty of the work. The MEOADL-ADC method is tested on a benchmark dataset, and the outcomes indicate its superiority over current methods, with a maximum accuracy of 97.60%.
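
    For the classification stage, a minimal PyTorch LSTM classifier over windows of flow-feature vectors might look as follows; the feature count, hidden size, window length, and binary benign/DDoS output are assumptions for illustration, and the MEOA and TSA stages are not shown.

```python
import torch
import torch.nn as nn

class LSTMAttackClassifier(nn.Module):
    """Minimal LSTM classifier over windows of traffic-flow features."""

    def __init__(self, n_features=20, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # classify from the last time step

# One training step on a toy batch of traffic windows.
model = LSTMAttackClassifier()
x = torch.randn(8, 30, 20)            # 8 windows of 30 time steps
y = torch.randint(0, 2, (8,))         # 0 = benign, 1 = DDoS
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
print(float(loss))
```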

    Chaotic Mapping Lion Optimization Algorithm-Based Node Localization Approach for Wireless Sensor Networks

    Wireless Sensor Networks (WSNs) contain several small, autonomous sensor nodes (SNs) able to sense, process, and wirelessly transfer data. These networks find applications in various domains like environmental monitoring, industrial automation, healthcare, and surveillance. Node Localization (NL) is a major problem in WSNs, aiming to determine the geographical positions of sensors accurately. Accurate localization is essential for distinct WSN applications comprising target tracking, environmental monitoring, and data routing. Therefore, this paper develops a Chaotic Mapping Lion Optimization Algorithm-based Node Localization Approach (CMLOA-NLA) for WSNs. The purpose of the CMLOA-NLA algorithm is to localize unknown nodes using anchor nodes (ANs) as reference points. In addition, the CMLOA is derived by integrating the tent chaotic mapping concept into the standard LOA, which tends to improve the convergence speed and precision of NL. The effectual performance of the CMLOA-NLA technique is illustrated with extensive simulations and comparisons against recent localization approaches. The experimental outcomes demonstrate considerable improvement in terms of accuracy as well as efficiency. Furthermore, the CMLOA-NLA technique was demonstrated to be highly robust against localization error and transmission range variation, with a minimum average localization error of 2.09%.
    Keywords: anchor nodes; metaheuristic optimization algorithm; node localization; tent chaotic mapping; wireless sensor network
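
    Two ingredients of the approach lend themselves to a short sketch: tent-chaotic-map population initialization and a ranging-residual fitness for node localization. The NumPy snippet below is a hedged illustration; the map parameter, anchor layout, and noiseless ranges are assumptions, and the full LOA search loop is omitted.

```python
import numpy as np

def tent_map_population(n_agents, dim, lb, ub, x0=0.37, mu=1.99):
    """Initialize a population with the tent chaotic map instead of uniform
    draws; chaotic sequences cover [0, 1] more evenly, the usual motivation
    for chaotic initialization. mu slightly below 2 avoids the map's
    degenerate floating-point collapse to zero."""
    seq = np.empty((n_agents, dim))
    x = x0
    for i in range(n_agents):
        for j in range(dim):
            x = mu * x if x < 0.5 else mu * (1.0 - x)
            seq[i, j] = x
    return lb + seq * (ub - lb)

def localization_fitness(pos, anchors, measured_d):
    """Ranging residual for one unknown node: mean squared mismatch between
    distances to the anchor nodes and the measured ranges."""
    est = np.linalg.norm(anchors - pos, axis=1)
    return np.mean((est - measured_d) ** 2)

# Toy 2-D setup: 4 anchors, noiseless ranges to a node at (40, 60).
anchors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)
true_pos = np.array([40.0, 60.0])
d = np.linalg.norm(anchors - true_pos, axis=1)
pop = tent_map_population(30, 2, 0.0, 100.0)
best = min(pop, key=lambda p: localization_fitness(p, anchors, d))
print(best, localization_fitness(best, anchors, d))
```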

    Modeling of Botnet Detection Using Barnacles Mating Optimizer with Machine Learning Model for Internet of Things Environment

    Owing to the development and expansion of energy-aware sensing devices and autonomous and intelligent systems, the Internet of Things (IoT) has grown remarkably and found uses in several day-to-day applications. However, IoT devices are highly prone to botnet attacks. To mitigate this threat, a lightweight and anomaly-based detection mechanism that can create profiles of malicious and normal actions on IoT networks could be developed. Additionally, the massive volume of data generated by IoT gadgets can be analyzed by machine learning (ML) methods, and several deep learning (DL)-related mechanisms have recently been developed to detect attacks on the IoT. This article designs a botnet detection model using the barnacles mating optimizer with machine learning (BND-BMOML) for the IoT environment. The presented BND-BMOML model focuses on the identification and recognition of botnets in the IoT environment. To accomplish this, the BND-BMOML model initially follows a data standardization approach. Then, the BMO algorithm is employed to select a useful set of features. For botnet detection, the BND-BMOML model employs an Elman neural network (ENN). Finally, the presented BND-BMOML model uses a chicken swarm optimization (CSO) algorithm for the parameter tuning process, demonstrating the novelty of the work. The BND-BMOML method was experimentally validated using a benchmark dataset, and the outcomes indicated significant improvements in performance over existing methods.
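
    The BMO stage searches over binary feature masks; as a rough illustration of the wrapper fitness such a search could minimize, the following scores a mask by cross-validated classification error traded off against subset size. The k-NN evaluator, weight alpha, and synthetic data are assumptions; the paper's own fitness definition may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def feature_subset_fitness(mask, X, y, alpha=0.99):
    """Wrapper fitness of a binary feature mask, of the kind a metaheuristic
    such as BMO could minimize: classification error traded off against the
    fraction of features kept (alpha is a conventional weight, assumed)."""
    if not mask.any():
        return 1.0  # an empty subset is worst-case
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()
    return alpha * (1 - acc) + (1 - alpha) * mask.mean()

# Score a few random masks on a toy dataset standing in for botnet traffic;
# a BMO-style search would evolve these masks instead of sampling blindly.
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)
rng = np.random.default_rng(0)
for mask in rng.random((5, 20)) < 0.5:
    print(mask.sum(), "features -> fitness",
          round(feature_subset_fitness(mask, X, y), 4))
```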

    Planet Optimization with Deep Convolutional Neural Network for Lightweight Intrusion Detection in Resource-Constrained IoT Networks

    Cyber security is becoming a challenging issue because of the growth of the Internet of Things (IoT), in which an immense quantity of tiny smart gadgets push trillions of bytes of data over the Internet. Such gadgets have several security flaws, due to a lack of hardware security support and defense mechanisms, making them prone to cyber-attacks. Moreover, IoT gateways present limited security features for identifying such threats, particularly the absence of intrusion detection techniques powered by deep learning (DL), since DL methods need computational power that exceeds the capability of such gateways. This article focuses on the development of Planet Optimization with a deep convolutional neural network for lightweight intrusion detection (PODCNN-LWID) in a resource-constrained IoT environment. The presented PODCNN-LWID technique primarily aims to identify and categorize intrusions. The PODCNN-LWID model involves two major processes, namely classification and parameter tuning. In the first stage, the PODCNN-LWID technique applies a DCNN model for intrusion identification. In the second stage, the PODCNN-LWID model utilizes the PO algorithm for hyperparameter tuning. The experimental validation of the PODCNN-LWID model is carried out on a benchmark dataset, and the results are assessed using varying measures. The comparison study reports the enhancements of the PODCNN-LWID model over other approaches.
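
    A "lightweight" DCNN in this setting suggests a small parameter footprint; the sketch below shows a compact 1-D CNN over per-flow feature vectors in PyTorch, with a parameter count printed as a sanity check. Layer widths, the 41-feature input, and the five-class output are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class LightweightDCNN(nn.Module):
    """Small 1-D CNN over per-flow feature vectors, sized for constrained
    gateways; layer widths are illustrative."""

    def __init__(self, n_features=41, n_classes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, n_classes))

    def forward(self, x):                 # x: (batch, n_features)
        return self.net(x.unsqueeze(1))   # add the single input channel

model = LightweightDCNN()
print(sum(p.numel() for p in model.parameters()), "parameters")  # footprint
print(model(torch.randn(4, 41)).shape)   # (4, 5) class logits
```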

    Water wave optimization with deep learning driven smart grid stability prediction

    Smart Grid (SG) technologies enable the acquisition of huge volumes of high-dimension, multi-class data related to electric power grid operations through the integration of advanced metering infrastructure, control systems, and communication technologies. In SGs, user demand data is gathered and examined against the present supply criteria, and the costs are then communicated to the clients so that they can decide about their electricity consumption. Since the entire procedure is time-dependent, it is essential to perform adaptive estimation of the SG's stability. Recent advancements in Machine Learning (ML) and Deep Learning (DL) models enable the design of effective stability prediction models for SGs. Against this background, the current study introduces a novel Water Wave Optimization with Optimal Deep Learning Driven Smart Grid Stability Prediction (WWOODL-SGSP) model. The aim of the presented WWOODL-SGSP model is to predict the stability level of SGs in a proficient manner. To attain this, the proposed WWOODL-SGSP model first applies a normalization process to scale the data to a uniform level. Then, the WWO algorithm is applied to choose an optimal subset of features from the preprocessed data. Next, a Deep Belief Network (DBN) model is applied to predict the stability level of SGs. Finally, the Slime Mold Algorithm (SMA) is exploited to fine-tune the hyperparameters involved in the DBN model. To validate the enhanced performance of the proposed WWOODL-SGSP model, a wide range of experimental analyses was conducted.
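
    As a hedged sketch of the normalization and prediction stages, the pipeline below chains min-max scaling, a single restricted Boltzmann machine layer, and a logistic-regression head in scikit-learn. A true DBN stacks several RBMs and is fine-tuned end to end, and the grid data here is a synthetic placeholder, so this is a shallow stand-in rather than the paper's model; the WWO and SMA stages are not shown.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for grid measurements (stable vs. unstable labels).
X, y = make_classification(n_samples=1000, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                  # normalization stage
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, random_state=0)),
    ("clf", LogisticRegression(max_iter=500)),  # stability prediction head
])
print("toy stability accuracy:", pipe.fit(X_tr, y_tr).score(X_te, y_te))
```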