16 research outputs found
ENPP: Extended Non-preemptive PP-aware Scheduling for Real-time Cloud Services
With the increasing use of cloud services and the growing number of requests to process tasks with minimum time and cost, resource allocation and scheduling become more challenging, especially in real-time applications. Resource scheduling is one of the most important scheduling problems and belongs to the class of NP-hard problems. In this paper, we propose an efficient algorithm to schedule real-time cloud services while considering resource constraints. The simulation results show that the proposed algorithm shortens the processing time of tasks and decreases the number of canceled tasks.
An Efficient Approach for Resource Auto-Scaling in Cloud Environments
Cloud services have become increasingly popular among users. Automatic resource provisioning for cloud services is one of the important challenges in cloud environments. In a cloud computing environment, resource providers should offer the required resources to users automatically and without limitation: whenever a user needs more resources, the required resources should be dedicated to that user without any problems. On the other hand, if the provisioned resources exceed the user's needs, the extra resources should be turned off temporarily and turned back on whenever they are needed. In this paper, we propose an automatic resource provisioning approach based on reinforcement learning that auto-scales resources according to a Markov Decision Process (MDP). Simulation results show that, in terms of the Service Level Agreement (SLA) violation rate and stability, the proposed approach performs better than similar approaches.
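The abstract does not detail the MDP, so the following is a minimal tabular Q-learning sketch of RL-based auto-scaling. The state (active VM count, load level), the scale-down/hold/scale-up actions, and a reward that penalizes SLA violations and idle capacity are all illustrative assumptions, not the paper's model:

```python
import random

random.seed(0)

ACTIONS = (-1, 0, 1)           # scale down, hold, scale up
MAX_VMS, LOAD_LEVELS = 5, 4    # bounds of the toy state space

# Q-table over states (active VM count, observed load level)
Q = {(v, l): [0.0, 0.0, 0.0]
     for v in range(1, MAX_VMS + 1) for l in range(LOAD_LEVELS)}

def reward(vms, load):
    """Penalize SLA violations (load above capacity) and idle capacity."""
    sla = -10.0 if load > vms else 0.0      # assume 1 load unit per VM
    idle = -1.0 * max(0, vms - load)
    return sla + idle

def apply(vms, action_idx):
    """Apply a scaling action, clamped to the allowed VM range."""
    return min(MAX_VMS, max(1, vms + ACTIONS[action_idx]))

alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(5000):
    state = (random.randrange(1, MAX_VMS + 1), random.randrange(LOAD_LEVELS))
    a = (random.randrange(3) if random.random() < eps
         else max(range(3), key=lambda i: Q[state][i]))
    next_vms = apply(state[0], a)
    next_load = random.randrange(LOAD_LEVELS)   # demand arrives stochastically
    r = reward(next_vms, next_load)
    Q[state][a] += alpha * (r + gamma * max(Q[(next_vms, next_load)]) - Q[state][a])

# Under peak load with one VM, the learned policy should favor scaling up
print(max(range(3), key=lambda i: Q[(1, 3)][i]))
```

The -10 SLA penalty versus the -1-per-VM idle penalty encodes the usual auto-scaling trade-off: violating the SLA is far costlier than briefly over-provisioning.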
Context-aware multi-user offloading in mobile edge computing: A federated learning-based approach
Mobile edge computing (MEC) provides an effective solution that helps Internet of Things (IoT) devices with delay-sensitive and computation-intensive tasks by offering computing capabilities in the proximity of mobile device users. Most of the existing studies ignore context information about the application, requests, sensors, resources, and network. In practice, however, context information has a significant impact on offloading decisions. In this paper, we consider context-aware offloading in multi-user MEC. The contexts are collected through autonomous management, following the MAPE loop, in all offloading processes. In addition, federated learning (FL)-based offloading is presented. Our learning method on mobile devices (MDs) is deep reinforcement learning (DRL). FL lets us exploit the distributed capabilities of MEC by exchanging updated weights between MDs and edge devices (EDs). The simulation results indicate that our method is superior to local computing, offloading, and FL without context-aware algorithms in terms of energy consumption, execution cost, network usage, delay, and fairness.
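The weight exchange between MDs and EDs described above follows the general federated averaging pattern. The sketch below shows only that averaging step under simplifying assumptions: the per-device gradients are hypothetical stand-ins for the paper's DRL updates, and the models are plain weight vectors:

```python
# Minimal federated averaging sketch: each mobile device takes a local
# update step, and the edge server averages the resulting weight vectors.

def local_update(weights, gradient, lr=0.1):
    """One local gradient step on a device (gradients given, for brevity)."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def fed_avg(device_weights):
    """Edge server: element-wise average of the devices' weight vectors."""
    n = len(device_weights)
    return [sum(ws) / n for ws in zip(*device_weights)]

global_w = [0.0, 0.0]
# Hypothetical per-device gradients for a round (stand-ins for DRL updates)
round_grads = [[1.0, -2.0], [3.0, 2.0], [2.0, 0.0]]

for _ in range(3):  # a few communication rounds
    updates = [local_update(global_w, g) for g in round_grads]
    global_w = fed_avg(updates)

print(global_w)
```

Only weights travel between MDs and the edge server, never the raw context data, which is what lets FL use MEC's distributed capabilities while keeping device data local.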
Data pipeline approaches in serverless computing: a taxonomy, review, and research trends
Serverless computing has gained significant popularity due to its scalability, cost-effectiveness, and ease of deployment. With the exponential growth of data, organizations face the challenge of efficiently processing and analyzing vast amounts of data in a serverless environment. Data pipelines play a crucial role in managing and transforming data within serverless architectures. This paper provides a taxonomy of data pipeline approaches in serverless computing. Based on architectural features, data processing techniques, and workflow orchestration mechanisms, these approaches are categorized into three primary methods: heuristic-based, machine learning-based, and framework-based. Furthermore, a systematic review of existing data pipeline frameworks and tools is provided, encompassing their strengths, limitations, and real-world use cases. The advantages and disadvantages of each approach, as well as the challenges and performance metrics that influence their effectiveness, are examined. Every data pipeline approach, whether framework-based, heuristic-based, or machine learning-based, has certain advantages and disadvantages, and each is suitable for specific use cases. Hence, it is crucial to assess the trade-offs between complexity, performance, cost, and scalability when selecting a data pipeline approach. Finally, the paper highlights a number of open issues and future research directions for data pipelines in serverless computing, involving scalability, fault tolerance, real-time data processing, data workflow orchestration, and function state management with respect to performance and cost in serverless environments.
An Evolutionary Multi-objective Optimization Technique to Deploy the IoT Services in Fog-enabled Networks: An Autonomous Approach
The Internet of Things (IoT) generates vast amounts of data, much of which is processed in cloud data centers. When data is transferred to the cloud over long distances, IoT services suffer high latency. Therefore, to speed up service provisioning, resources should be placed close to the user, i.e., at the edge of the network. To address this challenge, a new paradigm called fog computing was introduced and added as a layer in the IoT architecture. Fog computing is a decentralized computing infrastructure that provides storage and computation in the vicinity of IoT devices instead of sending data to the cloud. Hence, fog computing can provide lower latency and better Quality of Service (QoS) for real-time applications than cloud computing. Although the theoretical foundations of fog computing have already been established, the problem of placing IoT services on fog nodes remains challenging and has attracted much attention from researchers. In this paper, a conceptual computing framework based on fog-cloud control middleware is proposed for optimal IoT service placement. The problem is formulated as an automated planning model for managing service requests under constraints that take into account the heterogeneity of applications and resources. To solve the IoT service placement problem, an automated evolutionary approach based on Particle Swarm Optimization (PSO) is proposed, with the aim of maximizing the utilization of fog resources and improving QoS. Experimental studies in a synthetic environment are evaluated on various metrics, including services performed, waiting time, failed services, service cost, remaining services, and runtime. The comparison results show that the proposed PSO-based framework performs better than state-of-the-art methods.
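A minimal sketch of how PSO can drive a placement like the one described: each particle encodes one fog node per service, decoded from continuous coordinates by rounding, and the fitness sums node latency plus a capacity-overload penalty. The latency and capacity values, the penalty weight, and the PSO parameters are all illustrative assumptions, not the paper's formulation:

```python
import random

random.seed(1)

SERVICES, NODES = 6, 3
# Hypothetical per-node latency and capacity (stand-ins for real measurements)
latency = [1.0, 2.0, 3.0]
capacity = [3, 2, 2]

def decode(position):
    """Map continuous particle coordinates to a node index per service."""
    return [min(NODES - 1, max(0, int(round(x)))) for x in position]

def cost(position):
    """Total latency plus a penalty for overloading any fog node."""
    placement = decode(position)
    total = sum(latency[n] for n in placement)
    load = [placement.count(n) for n in range(NODES)]
    overload = sum(max(0, load[n] - capacity[n]) for n in range(NODES))
    return total + 10.0 * overload

# Standard PSO loop: inertia w, cognitive c1, social c2
n_particles, iters, w, c1, c2 = 20, 100, 0.7, 1.5, 1.5
pos = [[random.uniform(0, NODES - 1) for _ in range(SERVICES)]
       for _ in range(n_particles)]
vel = [[0.0] * SERVICES for _ in range(n_particles)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)

for _ in range(iters):
    for i in range(n_particles):
        for d in range(SERVICES):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (w * vel[i][d]
                         + c1 * r1 * (pbest[i][d] - pos[i][d])
                         + c2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
            if cost(pbest[i]) < cost(gbest):
                gbest = pbest[i][:]

print(decode(gbest), cost(gbest))
```

The large overload penalty steers the swarm toward feasible placements first, after which the latency term pulls services toward the cheaper fog nodes.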
Dynamic Resource Allocation Using Improved Firefly Optimization Algorithm in Cloud Environment
Today, cloud computing provides a suitable platform to meet the computing needs of users. One of the most important challenges facing cloud computing is Dynamic Resource Allocation (DSA), which is in the NP-hard class. One of the goals of DSA is to utilize resources efficiently and maximize productivity. In this paper, an improved firefly algorithm based on load-balancing optimization, called IFA-DSA, is introduced to solve the DSA problem. In addition to balancing workloads between the existing virtual machines, IFA-DSA reduces completion time by selecting appropriate objectives in the fitness function. Finding the best sequence of tasks for resource allocation is formulated as a multi-objective problem, with load balancing, completion time, average runtime, and migration rate as the objectives. To improve initial population creation in the firefly algorithm, a heuristic method is used instead of a random approach: the initial population is created based on task priority, where the priority of each task is determined using the pay-as-you-use model and a fuzzy approach. The experimental results show that the proposed method outperforms the ICFA method on the makespan criterion by an average of 3%.
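To make the underlying mechanics concrete, here is a bare firefly algorithm applied to task-to-VM assignment. It simplifies heavily: random initialization rather than the paper's fuzzy-priority heuristic, a single makespan fitness rather than the multi-objective function, and hypothetical task lengths:

```python
import math
import random

random.seed(2)

TASKS, VMS = 8, 3
length = [4, 7, 2, 5, 9, 3, 6, 1]   # hypothetical task lengths

def decode(x):
    """Map a firefly's continuous coordinates to one VM index per task."""
    return [min(VMS - 1, max(0, int(round(v)))) for v in x]

def makespan(x):
    """Fitness: finish time of the most loaded VM (lower is better)."""
    load = [0.0] * VMS
    for task, vm in enumerate(decode(x)):
        load[vm] += length[task]
    return max(load)

# Firefly parameters: base attractiveness beta0, absorption gamma, noise alpha
n, iters, beta0, gamma, alpha = 15, 60, 1.0, 0.5, 0.2
flies = [[random.uniform(0, VMS - 1) for _ in range(TASKS)] for _ in range(n)]

for _ in range(iters):
    for i in range(n):
        for j in range(n):
            if makespan(flies[j]) < makespan(flies[i]):  # j shines brighter
                r2 = sum((a - b) ** 2 for a, b in zip(flies[i], flies[j]))
                beta = beta0 * math.exp(-gamma * r2)     # attraction decays with distance
                flies[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                            for a, b in zip(flies[i], flies[j])]

best = min(flies, key=makespan)
print(decode(best), makespan(best))
```

Each firefly is pulled toward every brighter (better-fitness) firefly with a strength that decays exponentially with distance, which is what lets the swarm both converge and keep exploring.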
Light-Edge: A Lightweight Authentication Protocol for IoT Devices in an Edge-Cloud Environment
Due to the ever-growing number of active devices, the Internet has become highly popular. Smart devices can connect to the Internet and communicate with one another, shaping the Internet of Things (IoT). These smart devices generate data and connect to each other through edge-cloud infrastructure. Authentication of IoT devices plays a critical role in the successful integration of IoT, edge, and cloud computing technologies. The complexity and attack resistance of authentication protocols remain the main challenges. Motivated by this, this paper introduces a lightweight authentication protocol for IoT devices, named Light-Edge, based on a three-layer scheme comprising the IoT device layer, a trust center at the edge layer, and cloud service providers. The results show the superiority of the proposed protocol over other approaches in terms of attack resistance, communication cost, and time cost.
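The abstract does not specify Light-Edge's message flow, so the following is a generic HMAC challenge-response sketch of the kind of lightweight check a trust center at the edge layer could run; the key-provisioning step and the device ID are assumptions:

```python
import hashlib
import hmac
import secrets

# Pre-shared key between the IoT device and the trust center (assumed to be
# provisioned at device registration; the actual protocol's key setup may differ)
shared_key = secrets.token_bytes(16)

def trust_center_challenge():
    """Edge-layer trust center issues a fresh nonce to prevent replay."""
    return secrets.token_bytes(16)

def device_response(key, nonce, device_id):
    """Device proves key possession without ever sending the key itself."""
    return hmac.new(key, nonce + device_id, hashlib.sha256).digest()

def trust_center_verify(key, nonce, device_id, tag):
    expected = hmac.new(key, nonce + device_id, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time comparison

device_id = b"sensor-42"                        # hypothetical device identifier
nonce = trust_center_challenge()
tag = device_response(shared_key, nonce, device_id)
print(trust_center_verify(shared_key, nonce, device_id, tag))       # legitimate device
print(trust_center_verify(b"\x00" * 16, nonce, device_id, tag))     # wrong key fails
```

A symmetric HMAC exchange like this needs only a hash primitive and one round trip, which is why this style of scheme suits constrained IoT devices better than certificate-based handshakes.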