
    Modified Cuttlefish Swarm Optimization with Machine Learning-Based Sustainable Application of Solid Waste Management in IoT

    The internet of things (IoT) paradigm plays an important role in enhancing smart city tracking applications and managing city procedures in real time. One of the most important problems connected to smart city applications is solid waste management, which can have adverse effects on society's health and the environment. Waste management has become a challenge faced not only by developing nations but also by established, developed countries, and solid waste management is an important and challenging problem for environments across the entire world. There is therefore a need to develop an effective technique that removes these problems, or at least reduces them to a minimal level. This study develops a modified cuttlefish swarm optimization with machine learning-based solid waste management (MCSOML-SWM) technique for smart cities. The MCSOML-SWM technique aims to recognize different categories of solid waste and enable smart waste management. In the MCSOML-SWM model, a single shot detector (SSD) enables effective recognition of objects. A deep convolutional neural network-based MixNet model is then exploited to produce feature vectors. Since trial-and-error hyperparameter tuning is a tedious process, the MCSO algorithm is applied for automated hyperparameter tuning. For accurate waste classification, the MCSOML-SWM technique applies a support vector machine (SVM). A comprehensive set of simulations demonstrates the improved classification performance of the MCSOML-SWM model, with a maximum accuracy of 99.34%.
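
    As a rough illustration of the final stage of such a pipeline, the sketch below feeds synthetic deep feature vectors to an SVM and tunes its hyperparameters with a plain random search; the paper's MCSO metaheuristic, SSD detector, and MixNet extractor are not reproduced here, so all data and parameter ranges are assumptions.

```python
# Minimal sketch of the classification stage only: deep feature vectors are
# fed to an SVM whose hyperparameters are tuned by a search loop. Random
# feature vectors and a plain random search stand in for the MixNet extractor
# and the MCSO metaheuristic described in the abstract.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 256))        # stand-in for MixNet feature vectors
y = rng.integers(0, 5, size=600)       # stand-in for waste-category labels

best_score, best_params = -np.inf, None
for _ in range(20):                    # MCSO would drive this search instead
    params = {"C": 10 ** rng.uniform(-2, 3), "gamma": 10 ** rng.uniform(-4, 1)}
    score = cross_val_score(SVC(kernel="rbf", **params), X, y, cv=3).mean()
    if score > best_score:
        best_score, best_params = score, params

print("best CV accuracy:", round(best_score, 4), "with", best_params)
```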

    Concept-based and fuzzy adaptive e-learning

    This study aims to test an effective adaptive e-learning system that uses a coloured concept map to show the learner's knowledge level for each concept in the topic. A fuzzy logic system is used to evaluate the learner's knowledge level for each concept in the domain and to produce a ranked concept list of learning materials that addresses weaknesses in the learner's understanding. The system obtains information on a learner's understanding of concepts through an initial pre-test before the system is used for learning and a post-test after using the learning system. A fuzzy logic system is used to produce a weighted concept map during the learning process. The aim of this research is to show that the proposed adaptive e-learning system enhances a learner's performance and understanding. In addition, this research aims to increase participants' overall learning level and effectiveness by providing a coloured concept map of understanding followed by a ranked concept list of learning materials.
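
    A minimal sketch of the kind of fuzzy evaluation described above, assuming hand-made triangular membership functions and hypothetical per-concept scores; it is not the authors' system, only an illustration of mapping scores to low/medium/high knowledge levels and ranking the weakest concepts first.

```python
# Illustrative fuzzy knowledge evaluation: a concept's test score is mapped to
# low/medium/high via triangular membership functions, then concepts are
# ranked so the weakest appear first (as a coloured list would show them).
def tri(x, a, b, c):
    """Triangular membership function over the points a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def knowledge_level(score):            # score assumed to lie in [0, 100]
    levels = {
        "low": tri(score, -1, 0, 50),
        "medium": tri(score, 25, 50, 75),
        "high": tri(score, 50, 100, 101),
    }
    return max(levels, key=levels.get)

# Hypothetical per-concept post-test scores.
scores = {"recursion": 35, "loops": 80, "pointers": 55}
colour = {"low": "red", "medium": "yellow", "high": "green"}
for concept in sorted(scores, key=scores.get):   # weakest concepts first
    level = knowledge_level(scores[concept])
    print(f"{concept}: {colour[level]} ({level})")
```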

    Joint optimization of UAV-IRS placement and resource allocation for wireless powered mobile edge computing networks

    The rapid evolution of communication systems towards the next generation has led to an increased deployment of Internet of Things (IoT) devices for various real-time applications. However, these devices often face limitations in terms of processing power and battery life, which can hinder overall system performance. Additionally, applications such as augmented reality and surveillance require intensive computations within tight timeframes. This research investigates a mobile edge computing (MEC) network empowered by an unmanned aerial vehicle intelligent reflecting surface (UAV-IRS) to enhance the computational energy efficiency of the system through optimized resource allocation. The MEC infrastructure incorporates the energy transfer circuit (ETC) and edge server (ES), co-located with the intelligent access point (AP). To eliminate interference between energy transfer and data transmission, a time-division multiple access method is utilized. In the first phase, the ETC wirelessly transfers power to low-power IoT devices, which harvest and store the received energy in their batteries. In the second phase, the IoT devices use the stored energy for local computing or for offloading tasks. Furthermore, tall buildings may obstruct communication routes, impacting system functionality. To address these challenges, we propose an optimization framework that jointly considers time, power, phase shift design, and local computational resources. This joint optimization problem is non-convex and non-linear, making it NP-hard. To tackle this complexity, we decompose the problem into subproblems and solve them iteratively using a convex optimization toolbox such as CVX. Through simulations, we demonstrate that the proposed optimization framework improves system performance by 40.7% compared to alternative approaches.
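
    The sketch below illustrates the two-phase TDMA model in rough numbers only: energy harvested in phase 1 is spent on offloading in phase 2, and a coarse grid search picks the time split. All constants are assumed for illustration; the paper's joint optimization of IRS phase shifts, UAV placement, and local computing via convex subproblems is not reproduced.

```python
# Rough numerical sketch of the two-phase wireless-powered MEC frame:
# phase 1 harvests energy from the ETC, phase 2 spends it on task offloading.
# Constants are illustrative assumptions, not values from the paper.
import numpy as np

T, B, N0 = 1.0, 1e6, 1e-12          # frame length [s], bandwidth [Hz], noise power [W]
P_etc, eta, h = 3.0, 0.7, 1e-6      # ETC transmit power [W], harvest efficiency, channel gain

def offloaded_bits(t1):
    """Bits offloaded when a fraction t1 of the frame is used for power transfer."""
    t2 = T - t1
    if t2 <= 0:
        return 0.0
    energy = eta * P_etc * h * t1           # energy harvested in phase 1 [J]
    p_tx = energy / t2                      # spend it uniformly over phase 2 [W]
    rate = B * np.log2(1 + p_tx * h / N0)   # uplink rate [bit/s]
    return t2 * rate

splits = np.linspace(0.01, 0.99, 99)
best = max(splits, key=offloaded_bits)
print(f"best phase-1 fraction ~ {best:.2f}, bits offloaded ~ {offloaded_bits(best):.3e}")
```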

    Enhanced Chimp Optimization-Based Feature Selection with Fuzzy Logic-Based Intrusion Detection System in Cloud Environment

    Cloud computing (CC) refers to an Internet-based computing technology in which shared resources, such as storage, software, information, and platforms, are offered to users on demand. CC is a technology through which virtualized and dynamically scalable resources are presented to users over the Internet. Security is highly significant in this on-demand CC environment. Therefore, this paper presents an improved metaheuristics with fuzzy logic-based intrusion detection system for cloud security (IMFL-IDSCS) technique. The IMFL-IDSCS technique can identify intrusions in the distributed CC platform and secure it from probable threats. An individual instance of the IDS is deployed for every client, and it utilizes an individual controller for data management. In addition, the IMFL-IDSCS technique uses an enhanced chimp optimization algorithm-based feature selection (ECOA-FS) method for choosing optimal features, followed by an adaptive neuro-fuzzy inference system (ANFIS) model applied to recognize intrusions. Finally, the hybrid jaya shark smell optimization (JSSO) algorithm is used to optimize the membership functions (MFs). A widespread simulation analysis is performed to examine the enhanced outcomes of the IMFL-IDSCS technique. The extensive comparison study reported the enhanced outcomes of the IMFL-IDSCS model, with maximum detection efficiency: an accuracy of 99.31%, a precision of 92.03%, a recall of 78.25%, and an F-score of 81.80%.
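
    As a hedged illustration of wrapper-style feature selection in the spirit of ECOA-FS, the sketch below scores random feature masks with a stand-in classifier and a simple fitness function; the chimp-optimization update rules, the ANFIS classifier, and the JSSO tuning of membership functions are not reproduced.

```python
# Wrapper-style feature selection sketch: candidate feature masks are scored
# by cross-validated accuracy minus a small penalty on the number of features.
# A random-mask search and a decision tree stand in for ECOA and ANFIS.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=1)
rng = np.random.default_rng(1)

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(DecisionTreeClassifier(random_state=0), X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.mean()          # reward accuracy, penalize feature count

best_mask, best_fit = None, -1.0
for _ in range(40):                          # a metaheuristic would guide this search
    mask = rng.random(X.shape[1]) < 0.5
    f = fitness(mask)
    if f > best_fit:
        best_mask, best_fit = mask, f

print("selected features:", int(best_mask.sum()), "fitness:", round(best_fit, 4))
```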

    Intelligence based Accurate Medium and Long Term Load Forecasting System

    In this study, we aim to provide an efficient load prediction system for different local feeders to perform medium- and long-term load forecasting. Such a model helps the electric utility company plan future requirements for expansion, equipment retailing, or staff recruiting. We aimed to improve ahead forecasting by using a hybrid approach and by optimizing the parameters of our models. We used Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), Multilayer Perceptron (MLP), and hybrid methods, and we used Root Mean Square Error (RMSE), Mean Absolute Percentage Error (MAPE), Mean Absolute Error (MAE), and squared error for comparison. For the 3-month-ahead load forecast, the lowest prediction error was obtained using LSTM, with a MAPE of 2.70. For the 6-month-ahead forecast, MLP gave the best prediction, with a MAPE of 2.36. For the 9-month-ahead forecast, the best result was attained using LSTM, with a MAPE of 2.37. Likewise, a MAPE of 2.25 for the 1-year-ahead forecast was yielded using LSTM, and a MAPE of 2.49 for the 6-year-ahead forecast was provided using MLP. The proposed methods attain stable and better performance for load forecasting. The findings indicate that this model can be applied to plan future expansion requirements.
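
    Since the comparison above rests on error metrics, the short sketch below shows how RMSE, MAE, and MAPE are typically computed from actual and forecast load values; the series is synthetic, not the feeder data used in the study.

```python
# Standard forecasting error metrics computed on a small synthetic load series.
import numpy as np

actual = np.array([120.0, 135.0, 150.0, 160.0, 145.0, 130.0])     # MW, hypothetical
forecast = np.array([118.0, 140.0, 147.0, 165.0, 143.0, 128.0])   # MW, hypothetical

err = actual - forecast
rmse = np.sqrt(np.mean(err ** 2))              # Root Mean Square Error
mae = np.mean(np.abs(err))                     # Mean Absolute Error
mape = 100.0 * np.mean(np.abs(err / actual))   # Mean Absolute Percentage Error

print(f"RMSE={rmse:.2f} MW  MAE={mae:.2f} MW  MAPE={mape:.2f}%")
```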

    Explainable Artificial Intelligence Enabled TeleOphthalmology for Diabetic Retinopathy Grading and Classification

    Telehealth connects patients to vital healthcare services via remote monitoring, wireless communications, videoconferencing, and electronic consults. By increasing access to specialists and physicians, telehealth helps ensure that patients receive the proper care at the right time and in the right place. Teleophthalmology is a branch of telemedicine that provides eye-care services using digital medical equipment and telecommunication technologies. Multimedia computing with Explainable Artificial Intelligence (XAI) for telehealth has the potential to revolutionize various aspects of our society, but several technical challenges must be resolved before this potential can be realized. Advances in artificial intelligence methods and tools reduce waste and wait times, provide service efficiency and better insights, and increase speed, accuracy, and productivity in medicine and telehealth. Therefore, this study develops an XAI-enabled teleophthalmology for diabetic retinopathy grading and classification (XAITO-DRGC) model. The proposed XAITO-DRGC model utilizes OphthoAI IoMT headsets to enable remote monitoring of diabetic retinopathy (DR). To accomplish this, the XAITO-DRGC model applies median filtering (MF) and contrast enhancement as a pre-processing step. In addition, the XAITO-DRGC model applies U-Net-based image segmentation and a SqueezeNet-based feature extractor. Moreover, the Archimedes optimization algorithm (AOA) with a bidirectional gated recurrent convolutional unit (BGRCU) is exploited for DR detection and classification. The XAITO-DRGC method is experimentally validated using a benchmark dataset, and the outcomes are assessed from distinct perspectives. Extensive comparison studies demonstrate the improvements of the XAITO-DRGC model over recent approaches.
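
    A minimal sketch of the pre-processing step only, assuming OpenCV's median filter and CLAHE as one common contrast-enhancement choice and a synthetic image in place of a fundus photograph; the segmentation, feature-extraction, and BGRCU classification stages are not reproduced.

```python
# Pre-processing sketch: median filtering followed by contrast enhancement
# (CLAHE chosen here as one common option). A random grayscale array stands
# in for a retinal fundus image.
import cv2
import numpy as np

img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)   # stand-in fundus image

denoised = cv2.medianBlur(img, 5)                             # median filtering (MF)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(denoised)                              # contrast enhancement

print("pre-processed image:", enhanced.shape, enhanced.dtype)
```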