
    Comparative Study on Ant Colony Optimization (ACO) and K-Means Clustering Approaches for Jobs Scheduling and Energy Optimization Model in Internet of Things (IoT)

    The term Internet of Things (IoT) was coined in 1999 by Kevin Ashton, co-founder of the Auto-ID Center at the Massachusetts Institute of Technology (MIT). IoT is an environment that people understand in many different ways, depending on their requirements, point of view and purpose. When data are transmitted in an IoT environment, the distribution of network traffic fluctuates frequently: links or nodes fail at random, and new nodes are added frequently. Heavy network traffic degrades the response time of the whole system and continuously consumes more energy. Minimizing network traffic by finding the shortest path from source to destination reduces the overall response time and also lowers the energy consumption cost. The characteristics of the ant colony optimization (ACO) and K-Means clustering algorithms conform to the autocatalytic, positive-feedback mechanism of shortest-route searching from source to destination. In this article, the ACO and K-Means clustering algorithms are studied for finding the shortest route from source to destination while optimizing Quality of Service (QoS) constraints. For both algorithms, resources are assumed to operate in an active and heterogeneous IoT network environment. This work studies and compares the ACO and K-Means algorithms in order to design a response-time-aware scheduling model for IoT. It is proposed to divide the IoT environment into several areas and varying numbers of clusters depending on the types of networks. It is observed that this model makes the suggested routing algorithm more efficient in terms of response time, point-to-point delay, throughput and control-bit overhead.
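
    A hedged illustration may help here. The abstract does not include the authors' implementation, so the following is only a minimal ant-system sketch of the pheromone-plus-heuristic route construction it describes, run on a toy graph; the topology, parameter values and node names are all assumptions rather than details from the paper.

        import random

        # Toy weighted graph (assumed, not from the paper): node -> {neighbor: link cost}.
        GRAPH = {
            "S": {"A": 2, "B": 5},
            "A": {"C": 2, "B": 1},
            "B": {"C": 3, "D": 2},
            "C": {"D": 1},
            "D": {},
        }

        def aco_shortest_path(graph, src, dst, n_ants=20, n_iters=50,
                              alpha=1.0, beta=2.0, rho=0.5, q=1.0):
            """Basic ant-system search for a low-cost src -> dst path."""
            tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone per edge
            best_path, best_cost = None, float("inf")
            for _ in range(n_iters):
                found = []
                for _ in range(n_ants):
                    node, path, seen = src, [src], {src}
                    while node != dst:
                        moves = [v for v in graph[node] if v not in seen]
                        if not moves:            # dead end: discard this ant
                            path = None
                            break
                        # Move probability ~ pheromone^alpha * (1/cost)^beta.
                        w = [tau[(node, v)] ** alpha * (1 / graph[node][v]) ** beta
                             for v in moves]
                        node = random.choices(moves, weights=w)[0]
                        path.append(node)
                        seen.add(node)
                    if path:
                        cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                        found.append((path, cost))
                        if cost < best_cost:
                            best_path, best_cost = path, cost
                # Evaporate, then deposit pheromone in proportion to path quality.
                tau = {e: (1 - rho) * t for e, t in tau.items()}
                for path, cost in found:
                    for a, b in zip(path, path[1:]):
                        tau[(a, b)] += q / cost
            return best_path, best_cost

        print(aco_shortest_path(GRAPH, "S", "D"))  # e.g. (['S', 'A', 'C', 'D'], 5)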

    Artificial Intelligence Models for Scheduling Big Data Services on the Cloud

    The widespread adoption of Internet of Things (IoT) applications in many critical sectors (e.g., healthcare, unmanned autonomous systems, etc.) and the huge volumes of data generated by such applications have led to an unprecedented reliance on the cloud computing platform to store and process these data. Moreover, cloud providers tend to receive massive waves of demands on their storage and computing resources. To help providers deal with such demands without sacrificing performance, the concept of cloud automation has recently arisen to improve performance and reduce the manual effort related to the management of cloud computing workloads. However, several challenges have to be taken into consideration in order to guarantee optimal performance for big data storage and analytics in cloud computing environments. In this context, we propose in this thesis a smart scheduling model as an automated big data task scheduling approach for cloud computing environments. Our scheduling model combines Deep Reinforcement Learning (DRL), Federated Learning (FL), and Transfer Learning (TL) to automatically predict the IoT devices to which each incoming big data task should be scheduled so as to improve performance and reduce execution cost. Furthermore, we solve the long-execution-time and data-shortage problems by introducing an FL-based solution that also preserves privacy and reduces training and data complexity. The motivation of this thesis stems from four main observations/research gaps that we have drawn through our literature reviews and/or experiments: (1) most existing cloud-based scheduling solutions consider the scheduling problem only from the task-priority viewpoint, which increases the amount of wasted resources in the case of malicious or compromised IoT devices; (2) existing scheduling solutions in the domain of cloud and edge computing are still ineffective at making real-time decisions concerning resource allocation and management in cloud systems; (3) it is quite difficult to schedule tasks or learning models from servers in areas that are far from the objects and IoT devices, which entails significant delay and response time for the process of transmitting data; and (4) none of the existing scheduling solutions has yet addressed the issue of dynamic task scheduling automation in complex and large-scale edge computing settings. In this thesis, we address the scheduling challenges related to the cloud and edge computing environment. To this end, we argue that trust should be an integral part of the decision-making process and therefore design a trust establishment mechanism between the edge server and IoT devices. The trust mechanism aims to detect those IoT devices that over-utilize or under-utilize their resources. Thereafter, we design a smart scheduling algorithm to automate the process of scheduling large-scale workloads onto edge cloud computing resources while taking into account the trust scores, task waiting time, and energy levels of the IoT devices to make appropriate scheduling decisions. Finally, we apply our scheduling strategy in the healthcare domain to investigate its applicability in a real-world scenario (COVID-19).
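
    The thesis's DRL/FL/TL pipeline cannot be condensed into a few lines, but the final decision step it describes, weighing trust scores, task waiting times and energy levels of IoT devices, can be sketched as a static scoring function. The weights, field names and device data below are illustrative assumptions, not the learned policy from the thesis.

        from dataclasses import dataclass

        @dataclass
        class Device:
            name: str
            trust: float    # trust score in [0, 1] from the trust mechanism
            wait_s: float   # current task-queue waiting time, in seconds
            energy: float   # remaining energy level in [0, 1]

        def pick_device(devices, min_energy=0.2, w_trust=0.5, w_wait=0.3, w_energy=0.2):
            """Return the eligible device with the best weighted score."""
            eligible = [d for d in devices if d.energy >= min_energy]
            if not eligible:
                return None
            max_wait = max(d.wait_s for d in eligible) or 1.0
            def score(d):
                return (w_trust * d.trust                       # favour trusted devices
                        + w_wait * (1 - d.wait_s / max_wait)    # favour short queues
                        + w_energy * d.energy)                  # favour charged devices
            return max(eligible, key=score)

        fleet = [Device("edge-1", trust=0.9, wait_s=4.0, energy=0.6),
                 Device("edge-2", trust=0.4, wait_s=1.0, energy=0.9)]
        print(pick_device(fleet).name)  # edge-2: its short queue and high energy win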

    Route Discovery Development for Multiple Destination Using Artificial Ant Colony

    Smart cities need smart applications for their citizens, not just digital devices. Smart applications support users' decision-making by means of artificial intelligence. Many real-world online shopping and delivery services have attracted customers, especially since the COVID-19 pandemic, when people prefer to keep social distance and minimize visits to public places. These services need to discover the shortest path for a delivery driver who must visit multiple destinations to serve customers. The aim of this research is to develop route discovery for multiple destinations using an ACO algorithm for multiple-destination route planning. The algorithm extends the Google Maps application to optimize the route when it is used for multiple destinations and when the route is updated with a new destination. The results show an improvement in multiple-destination route discovery: both the shortest path and the visiting order of the cities are found. In conclusion, the simulation results of the ACO algorithm for multiple-destination route planning could be used with the Google Maps application to provide artificial decision support to the citizens of Erbil city. Finally, we discuss our vision for future development.
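
    The paper's ACO implementation over Google Maps routes is not reproduced here. To make the underlying multiple-destination ordering problem concrete, the sketch below uses a nearest-neighbour start improved by 2-opt, a common baseline against which ACO tours are compared; the stop coordinates and the choice of baseline heuristic are assumptions.

        import math

        # Hypothetical delivery stops as (x, y) coordinates; stop 0 is the depot.
        STOPS = [(0, 0), (2, 3), (5, 1), (6, 4), (1, 5)]

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def nearest_neighbour_tour(stops):
            """Greedy initial visiting order starting from the depot."""
            todo, tour = set(range(1, len(stops))), [0]
            while todo:
                nxt = min(todo, key=lambda j: dist(stops[tour[-1]], stops[j]))
                tour.append(nxt)
                todo.remove(nxt)
            return tour

        def two_opt(tour, stops):
            """Reverse tour segments as long as doing so shortens the round trip."""
            improved = True
            while improved:
                improved = False
                for i in range(1, len(tour) - 1):
                    for j in range(i + 1, len(tour)):
                        a, b = tour[i - 1], tour[i]
                        c, d = tour[j], tour[(j + 1) % len(tour)]
                        if (dist(stops[a], stops[c]) + dist(stops[b], stops[d])
                                < dist(stops[a], stops[b]) + dist(stops[c], stops[d])):
                            tour[i:j + 1] = reversed(tour[i:j + 1])
                            improved = True
            return tour

        print(two_opt(nearest_neighbour_tour(STOPS), STOPS))  # visiting order of stops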

    A review on job scheduling technique in cloud computing and priority rule based intelligent framework

    In recent years, the concept of cloud computing has been gaining traction as a way to provide dynamically scalable access to shared computing resources (software and hardware) via the internet. It is no secret that cloud computing's ability to supply mission-critical services has made job scheduling a hot topic in the industry right now. Poor scheduling can waste cloud resources through under-utilization or degrade in-service performance through over-utilization. Various strategies from the literature are examined in this research in order to give procedures for the planning and performance of job scheduling techniques (JSTs) in cloud computing. To begin, we examine and tabulate the existing JSTs linked to cloud and grid computing. The present achievements are then thoroughly reviewed, difficulties and flaws are identified, and intelligent solutions are devised that take advantage of the proposed taxonomy. To bridge the gaps between present investigations, this paper also seeks to provide readers with a conceptual framework, in which we propose an effective job scheduling technique for cloud computing. These findings are intended to inform academics and policymakers about the advantages of a more efficient cloud computing setup. In cloud computing, fair job scheduling is most important, so we propose a priority-based scheduling technique to ensure fair job scheduling. Finally, the open research questions raised in this article will create a path for the implementation of an effective job scheduling strategy.
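
    As a minimal illustration of the priority-rule mechanism such a framework rests on (a generic sketch, not the paper's proposed technique), the following scheduler serves the lowest priority value first and breaks ties in FIFO order so that equal-priority jobs are treated fairly; the job names and priority values are invented.

        import heapq
        import itertools

        class PriorityScheduler:
            """Lower priority value runs first; a counter keeps FIFO fairness."""

            def __init__(self):
                self._heap = []
                self._counter = itertools.count()

            def submit(self, job, priority):
                heapq.heappush(self._heap, (priority, next(self._counter), job))

            def next_job(self):
                return heapq.heappop(self._heap)[2] if self._heap else None

        sched = PriorityScheduler()
        sched.submit("batch-report", priority=3)
        sched.submit("vm-health-check", priority=1)
        sched.submit("user-query", priority=1)
        print(sched.next_job())  # vm-health-check (priority 1, submitted first)
        print(sched.next_job())  # user-query (same priority, submitted later)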

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly owing to attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand and pay-per-use services, use from anywhere, quality of service, resilience, etc. With this rapid growth, many users may require services or need to execute their tasks simultaneously on resources provided by service providers. To deliver these services with the best performance, with minimum cost, response time and makespan, and with effective use of resources, an intelligent and efficient task scheduling technique is required; it is considered one of the main and essential issues in the cloud computing environment, since it allocates tasks to the proper cloud resources and optimizes overall system performance. To this end, researchers have put huge efforts into developing several classes of scheduling algorithms suited to various computing environments and to the needs of various types of individuals and organizations. This article provides a classification of the scheduling strategies and algorithms proposed for the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given, and the future research work in the reviewed articles (if available) is pointed out. This work reviews 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms.
    Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective.
    DOI: 10.7176/IKM/12-5-03
    Publication date: September 30th, 2022
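
    For concreteness, one classic heuristic that surveys of this kind routinely cover is min-min, which repeatedly assigns the task with the smallest achievable completion time to the VM on which it finishes earliest. The sketch below is a generic textbook version rather than any specific algorithm among the 88 reviewed, and the expected-time-to-compute (ETC) matrix is invented.

        def min_min_schedule(etc):
            """Min-min scheduling. etc[t][v] = expected time of task t on VM v."""
            ready = [0.0] * len(etc[0])        # when each VM becomes free
            todo = set(range(len(etc)))
            plan = {}
            while todo:
                # For every unscheduled task, its best completion time right now.
                best = {t: min((ready[v] + etc[t][v], v) for v in range(len(ready)))
                        for t in todo}
                t = min(best, key=lambda k: best[k][0])   # overall smallest
                finish, v = best[t]
                plan[t] = v
                ready[v] = finish
                todo.remove(t)
            return plan, max(ready)            # task -> VM map, and the makespan

        etc = [[14, 16],                       # 3 tasks x 2 VMs, assumed times
               [5, 11],
               [20, 7]]
        plan, makespan = min_min_schedule(etc)
        print(plan, makespan)                  # {1: 0, 2: 1, 0: 0} 19.0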

    IoT in smart communities, technologies and applications.

    The Internet of Things is a system that integrates different devices and technologies, removing the necessity of human intervention. This enables smart (or smarter) cities around the world. By hosting different technologies and allowing interactions between them, the Internet of Things has spearheaded the development of smart city systems for sustainable living, increased comfort and productivity for citizens. The Internet of Things (IoT) for smart cities has many different domains and draws upon various underlying systems for its operation. In this work, we provide a holistic coverage of the Internet of Things in smart cities by discussing the fundamental components that make up the IoT smart city landscape, the technologies that enable these domains to exist, the most prevalent practices and techniques used in these domains, as well as the challenges that the deployment of IoT systems for smart cities encounters and that need to be addressed for ubiquitous use of smart city applications. We also cover optimization methods and applications from a smart city perspective enabled by the Internet of Things. Towards this end, a mapping is provided for the most encountered applications of computational optimization within IoT smart cities for five popular optimization methods: ant colony optimization, genetic algorithms, particle swarm optimization, artificial bee colony optimization and differential evolution. For each application identified, the algorithms used, the objectives considered, the nature of the formulation and the constraints taken into account are specified and discussed. Lastly, the data setup used by each covered work is mentioned and directions for future work are identified.

    Within the smart health domain of IoT smart cities, human activity recognition has been a key study topic in the development of cyber-physical systems and assisted living applications. In particular, inertial sensor based systems have become increasingly popular because they do not restrict users' movement and are relatively simple to implement compared to other approaches. Fall detection is one of the most important tasks in human activity recognition. With an increasingly aging world population and an inclination by the elderly to live alone, the need to incorporate dependable fall detection schemes in smart devices such as phones and watches has gained momentum. Differentiating between falls and activities of daily living (ADLs) has therefore been the focus of researchers in recent years, with very good results. However, one aspect of fall detection that has not been investigated much is direction- and severity-aware fall detection. Since a fall detection system aims to detect falls and notify medical personnel, it could be of added value for health professionals tending to a patient to know the nature of the accident. In this regard, as a case study for smart health, four different experiments have been conducted for the task of fall detection with direction and severity consideration on two publicly available datasets. These experiments not only tackle the problem at an increasingly complicated level (the first considers a fall-only scenario and the others a combined ADL-and-fall scenario) but also present methodologies that outperform the state-of-the-art techniques discussed. Lastly, recommendations for future research are also provided.
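
    The experiments above rely on learned models over inertial sensor data, which are not reproduced here. As a far simpler baseline that makes the sensing signal concrete, the sketch below flags a fall when a near-free-fall dip in the acceleration magnitude is followed shortly by a high-g impact spike; the thresholds, window size and synthetic trace are assumptions, and the peak impact magnitude would at best be a crude proxy for severity.

        import math

        def detect_fall(samples, free_fall_g=0.4, impact_g=2.5, window=20):
            """Flag a fall: a sub-free-fall dip followed by an impact spike."""
            mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
            for i, m in enumerate(mags):
                if m < free_fall_g and any(v > impact_g
                                           for v in mags[i + 1:i + 1 + window]):
                    return True
            return False

        # Synthetic accelerometer trace (in g): rest, free fall, impact, rest.
        trace = ([(0, 0, 1.0)] * 5 + [(0, 0, 0.1)] * 3
                 + [(0.5, 0.4, 3.0)] + [(0, 0, 1.0)] * 5)
        print(detect_fall(trace))  # True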

    Bio-inspired computation for big data fusion, storage, processing, learning and visualization: state of the art and future directions

    This overview centres on research achievements that have recently emerged from the confluence of Big Data technologies and bio-inspired computation. A manifold of reasons can be identified for the profitable synergy between these two paradigms, all rooted in the adaptability, intelligence and robustness that biologically inspired principles can provide to technologies aimed at managing, retrieving, fusing and processing Big Data efficiently. We delve into this research field by first analyzing the existing literature in depth, with a focus on advances reported in the last few years. This literature analysis is complemented by an identification of the new trends and open challenges in Big Data that remain unsolved to date and that can be effectively addressed by bio-inspired algorithms. As a second contribution, this work elaborates on how bio-inspired algorithms need to be adapted for use in a Big Data context, in which data fusion becomes crucial as a preliminary step to allow the processing and mining of several, potentially heterogeneous, data sources. This analysis allows exploring and comparing the scope and efficiency of existing approaches across different problems and domains, with the purpose of identifying new potential applications and research niches. Finally, this survey highlights open issues that remain unsolved to date in this research avenue, alongside a prescription of recommendations for future research.

    This work has received funding support from the Basque Government (Eusko Jaurlaritza) through the Consolidated Research Group MATHMODE (IT1294-19) and the EMAITEK and ELKARTEK programs. D. Camacho also acknowledges support from the Spanish Ministry of Science and Education under grant PID2020-117263GB-100 (FightDIS), the Comunidad Autonoma de Madrid under grant S2018/TCS-4566 (CYNAMON), and the CHIST-ERA 2017 BDSI PACMEL project (PCI2019-103623, Spain).
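
    Of the five optimization methods mapped above, differential evolution is the easiest to show compactly. The sketch below is the canonical DE/rand/1/bin scheme on a toy objective, with a comment marking where the Big Data adaptation discussed in the survey (distributing the costly fitness evaluations) would typically enter; the parameter values are conventional defaults, not settings drawn from the survey.

        import random

        def differential_evolution(f, bounds, pop_size=20, fw=0.8, cr=0.9, iters=100):
            """Canonical DE/rand/1/bin minimiser of f over box bounds."""
            dim = len(bounds)
            pop = [[random.uniform(lo, hi) for lo, hi in bounds]
                   for _ in range(pop_size)]
            # On real Big Data problems, the f(x) evaluations are the costly
            # part and are what gets distributed across a cluster.
            fit = [f(x) for x in pop]
            for _ in range(iters):
                for i in range(pop_size):
                    a, b, c = random.sample([x for j, x in enumerate(pop) if j != i], 3)
                    j_rand = random.randrange(dim)
                    trial = [a[j] + fw * (b[j] - c[j])
                             if (random.random() < cr or j == j_rand) else pop[i][j]
                             for j in range(dim)]
                    trial = [min(max(v, lo), hi) for v, (lo, hi) in zip(trial, bounds)]
                    f_trial = f(trial)
                    if f_trial < fit[i]:        # greedy selection
                        pop[i], fit[i] = trial, f_trial
            return min(zip(fit, pop))

        sphere = lambda x: sum(v * v for v in x)
        print(differential_evolution(sphere, [(-5, 5)] * 3))  # near-zero cost and point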

    Security of Big Data over IoT Environment by Integration of Deep Learning and Optimization

    Security matters more than ever given the spread of the IoT, which enables two-way communication between various electronic devices and is therefore essential to contemporary living. However, it has been shown that IoT may be readily exploited, and there is a need to develop new technologies, or to combine existing ones, to address these security issues. Deep learning (DL), a kind of machine learning (ML), has been used in earlier studies to discover security breaches with good results. IoT device data are abundant, diverse and trustworthy; thus, improved performance and data management are attainable with the help of big data technologies. The current state of IoT security, big data and deep learning led to an all-encompassing study of the topic. This study examines the interrelationships of big data, IoT security and DL technologies and draws parallels between these three areas. Technical works in all three fields have been compared, allowing the development of a thematic taxonomy. Finally, we lay the groundwork for further investigation into IoT security concerns by identifying and assessing the obstacles inherent in using DL for security over big data. The security of big data is addressed in this article by categorizing various threats using a deep learning method, with optimization applied to raise both accuracy and performance.
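
    The article surveys DL approaches without fixing a single model, so the following is only one plausible shape for the classifier it discusses: a minimal PyTorch multilayer perceptron that assigns network-traffic flows to threat categories. The feature count, class labels and random stand-in data are assumptions.

        import torch
        import torch.nn as nn

        N_FEATURES, N_CLASSES = 20, 4   # e.g. benign / DoS / probe / botnet (assumed)

        model = nn.Sequential(
            nn.Linear(N_FEATURES, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, N_CLASSES),
        )
        loss_fn = nn.CrossEntropyLoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)

        # Stand-in for a labelled flow dataset (random here; real traffic in practice).
        X = torch.randn(256, N_FEATURES)
        y = torch.randint(0, N_CLASSES, (256,))

        for epoch in range(10):
            opt.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            opt.step()

        pred = model(X).argmax(dim=1)   # predicted threat class per flow
        print(f"train accuracy: {(pred == y).float().mean():.2f}")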

    Machine Learning for Unmanned Aerial System (UAS) Networking

    Fueled by the advancement of 5G new radio (5G NR), rapid development has occurred in many fields. Compared with conventional approaches, beamforming and network slicing give 5G NR roughly a tenfold improvement in latency, connection density and experienced throughput over 4G long-term evolution (4G LTE). These advantages pave the way for the large-scale evolution of cyber-physical systems (CPS). Reduced energy consumption, advances in control engineering and the simplification of the Unmanned Aircraft System (UAS) make large-scale UAS networking deployment feasible, and a UAS network can carry out multiple complex missions simultaneously. However, the limitations of conventional approaches still make it a big challenge to balance massive management against efficient networking at scale. Using 5G NR and machine learning, the contributions of this dissertation can be summarized as follows. I propose a novel Optimized Ad-hoc On-demand Distance Vector (OAODV) routing protocol that improves the throughput of intra-UAS networking while reducing system overhead. To improve security, I propose a blockchain scheme that mitigates malicious base stations in cellular-connected UAS networking, together with a proof-of-traffic (PoT) mechanism that improves the efficiency of blockchain for large-scale UAS networking. Inspired by the biological cell paradigm, I propose cell-wall routing protocols for heterogeneous UAS networking. With 5G NR, interconnections between UAS networks strengthen their throughput and elasticity; with machine learning, routing scheduling for intra- and inter-UAS networking enhances throughput at scale. Inter-UAS networking can achieve globally max-min throughput via edge coloring, and I leverage upper and lower bounds to accelerate the edge-coloring optimization. This dissertation paves the way for UAS networking at the integration of CPS and machine learning: UAS networks can achieve outstanding performance in a decentralized architecture. Concurrently, it gives insights into large-scale UAS networking, which is fundamental to integrating UAS into the National Airspace System (NAS) and critical to aviation in both manned and unmanned fields. The proposed approaches extend the state of the art of UAS networking in a decentralized architecture and contribute to the establishment of UAS networking with CPS.
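
    The dissertation's bound-accelerated edge-coloring algorithm is not reproduced here. As a minimal sketch of the underlying idea (links that share an aircraft must receive different colours, i.e. transmission slots, while links of one colour may be active simultaneously), the following is the elementary greedy colouring; the link topology is made up.

        from collections import defaultdict

        def greedy_edge_coloring(edges):
            """Give each edge the smallest colour unused at either endpoint."""
            used = defaultdict(set)     # node -> colours already taken there
            colouring = {}
            for u, v in edges:
                c = 0
                while c in used[u] or c in used[v]:
                    c += 1
                colouring[(u, v)] = c
                used[u].add(c)
                used[v].add(c)
            return colouring

        # Hypothetical inter-UAS links between five aircraft.
        links = [("uav1", "uav2"), ("uav1", "uav3"), ("uav2", "uav3"),
                 ("uav3", "uav4"), ("uav4", "uav5")]
        print(greedy_edge_coloring(links))  # colour = time/frequency slot per link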