41 research outputs found

    Optimized load balancing for efficient resource provisioning in the cloud

    Cloud computing offers on-demand provisioning of computing resources to users. Cloud service providers manage a large number of user requests to deliver services according to user demands. Allocating and managing user requests on physical hardware is challenging because the load must be balanced across the available system resources. Effective load balancing saves operational costs, improves user satisfaction, and accelerates overall performance. In this paper, we propose an algorithm entitled Optimized Load Balancing (OLB), which aims to carry out efficient load balancing by improving processing and response times. We compared our proposed load balancing algorithm with an existing one; experimental results show that our proposed algorithm outperforms it.
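The abstract does not specify OLB's exact heuristic, but a common baseline for processing/response-time-aware balancing is to send each request to the VM with the earliest estimated finish time. A minimal sketch, assuming heterogeneous VM speeds and known task lengths (both assumptions, not the paper's method):

```python
def assign_requests(task_lengths, vm_speeds):
    """Greedily map tasks to VMs, minimising each task's finish time.

    task_lengths: work units per request
    vm_speeds:    work units each VM processes per second
    Returns (assignment list, per-VM busy time in seconds).
    """
    busy = [0.0] * len(vm_speeds)  # seconds until each VM is free
    assignment = []
    for length in task_lengths:
        # finish time on VM i = time VM i frees up + task runtime on VM i
        finish = [busy[i] + length / vm_speeds[i] for i in range(len(vm_speeds))]
        best = min(range(len(vm_speeds)), key=finish.__getitem__)
        busy[best] += length / vm_speeds[best]
        assignment.append(best)
    return assignment, busy
```

For example, three equal tasks on a slow and a fast VM alternate so that both VMs finish at the same time, which is the intuition behind balancing for response time.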

    Evaluation of cloud brokering algorithms in cloud based data center

    Migrating to the cloud to minimize infrastructure costs and technical effort is a common trend these days. Over the last couple of years, demand for cloud computing has grown steadily, increasing the number of both cloud users and cloud service providers. With so many service providers available, it is very difficult for a user to choose one that satisfies their specific requirements, and a new research paradigm has emerged to address this problem. Cloud interoperability, which would allow users to easily migrate applications and workloads across cloud service providers, is a related issue; defining a common standard between cloud providers could resolve it. Another approach is the use of a cloud service broker, which assists users in finding an appropriate cloud service provider before they deploy their application or service. A broker is capable of finding a service provider that satisfies the user's service requirements in terms of a service level agreement. In this paper, two new cloud brokering algorithms are proposed, together with their initial evaluation.
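The two brokering algorithms themselves are not detailed in the abstract; a typical broker core, however, filters providers against the SLA and ranks the survivors. A sketch under that assumption, with field names (latency_ms, availability, price) invented for illustration:

```python
def broker_select(providers, sla):
    """Return the cheapest provider meeting every SLA constraint, else None.

    providers: list of dicts with latency_ms, availability, price
    sla:       dict with max_latency_ms and min_availability
    """
    # keep only providers that satisfy all SLA constraints
    feasible = [
        p for p in providers
        if p["latency_ms"] <= sla["max_latency_ms"]
        and p["availability"] >= sla["min_availability"]
    ]
    if not feasible:
        return None  # no provider can honour this SLA
    # rank survivors by price; a real broker may use a weighted score
    return min(feasible, key=lambda p: p["price"])
```

A multi-criteria broker would replace the single `price` key with a weighted utility over cost, latency, and availability, which is one natural axis along which brokering algorithms differ.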

    Multiple Linear Regression-Based Energy-Aware Resource Allocation in the Fog Computing Environment

    Fog computing is a promising computing paradigm for time-sensitive Internet of Things (IoT) applications. It helps to process data close to the users, delivering faster processing outcomes than the Cloud, and it also reduces network traffic. The Fog computing environment is highly dynamic, and most Fog devices are battery powered, so the chance of application failure is high, which delays application outcomes. On the other hand, rerunning a failed application on other devices does not comply with time-sensitiveness. To solve this problem, applications must run in an energy-efficient manner, which is challenging given the dynamic nature of the Fog environment: applications should be scheduled so that they do not fail due to the unavailability of energy. Prior works lack energy-aware application execution that considers the dynamism of the Fog environment. Hence, in this paper we propose a multiple linear regression-based resource allocation mechanism that runs applications in an energy-aware manner in the Fog computing environment to minimise failures due to energy constraints. We present a sustainable energy-aware framework and algorithm that execute applications in the Fog environment in an energy-aware manner. The trade-off between energy-efficient allocation and application execution time has been investigated and shown to have a minimal negative impact on the system. Compared with existing approaches, our proposed approach minimises delay by 20% and processing time by 17%, and SLA violations decrease by 57% for the proposed energy-aware allocation.
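The regression step can be sketched in isolation: fit a multiple linear regression that predicts a device's remaining energy from observed features, then place an application only on devices predicted to have enough energy to finish it. This is a pure-Python normal-equations sketch; the feature set and prediction target are assumptions, not the paper's exact model:

```python
def fit_linear(X, y):
    """Least-squares fit of y ~ X b, where each row of X starts with a 1 (bias)."""
    n = len(X[0])
    # Normal equations: (X^T X) b = X^T y
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(n)]
    # Gaussian elimination with partial pivoting
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        v[col], v[pivot] = v[pivot], v[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            v[r] -= f * v[col]
    # Back substitution
    b = [0.0] * n
    for r in range(n - 1, -1, -1):
        b[r] = (v[r] - sum(A[r][c] * b[c] for c in range(r + 1, n))) / A[r][r]
    return b

def predict(b, features):
    """Predicted remaining energy for one device's feature vector."""
    return b[0] + sum(w * f for w, f in zip(b[1:], features))
```

A scheduler built on this would periodically refit on fresh telemetry, which is one way to cope with the dynamism of the Fog environment the abstract emphasises.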

    Defending SDN against packet injection attacks using deep learning

    The (logically) centralised architecture of software-defined networks makes them an easy target for packet injection attacks, in which an attacker injects malicious packets into the SDN to degrade the services and performance of the SDN controller and overflow the capacity of the SDN switches. Such attacks have been shown to ultimately stop the network functioning in real time, leading to network breakdowns. There has been significant work on detecting and defending against similar DoS attacks in non-SDN networks, but detection and protection techniques for SDN against packet injection attacks are still in their infancy. Furthermore, many of the proposed solutions can be easily bypassed by simple modifications to the attacking packets or by altering the attack profile. In this paper, we develop novel Graph Convolutional Neural Network models and algorithms that group network nodes/users into security classes by learning from network data. We start with two simple classes: nodes that engage in suspicious packet injection attacks and nodes that do not. From these classes, we then partition the network into separate segments with different security policies, enforced by distributed Ryu controllers in an SDN. Experiments on an emulated SDN show that our detection solution outperforms alternative approaches, with above 99% detection accuracy on various types (both old and new) of injection attacks. More importantly, our mitigation solution maintains the continuous operation of non-compromised nodes while isolating compromised or suspicious nodes in real time. All code and data are publicly available for reproducibility of our results.
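The building block of such models is the graph-convolution propagation rule H' = relu(D^-1/2 (A+I) D^-1/2 H W): each node's features are averaged with its neighbours' (normalised by degree) and then linearly transformed, so a node's security class depends on its network neighbourhood. A minimal pure-Python sketch of one layer, with toy dimensions assumed (the paper's actual architecture and features are not given in the abstract):

```python
def gcn_layer(adj, H, W):
    """One GCN propagation step: relu(D^-1/2 (A+I) D^-1/2 . H . W).

    adj: n x n adjacency matrix (0/1, no self-loops)
    H:   n x f node feature matrix
    W:   f x o learnable weight matrix
    """
    n = len(adj)
    # add self-loops: A_hat = A + I
    a = [[adj[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in a]
    # symmetric normalisation: a[i][j] / sqrt(d_i * d_j)
    norm = [[a[i][j] / (deg[i] * deg[j]) ** 0.5 for j in range(n)]
            for i in range(n)]
    # message passing: aggregate neighbour features
    agg = [[sum(norm[i][k] * H[k][f] for k in range(n))
            for f in range(len(H[0]))] for i in range(n)]
    # linear transform followed by ReLU
    return [[max(0.0, sum(agg[i][f] * W[f][o] for f in range(len(W))))
             for o in range(len(W[0]))] for i in range(n)]
```

Stacking two or three such layers and ending with a softmax over the two classes (suspicious vs. benign) yields a node classifier of the kind the paper describes.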

    IoT-based emergency vehicle services in intelligent transportation system

    An Emergency Management System (EMS) is an important component of intelligent transportation systems, and its primary objective is to send Emergency Vehicles (EVs) to the location of a reported incident. However, increasing traffic in urban areas, especially during peak hours, often delays the arrival of EVs, which ultimately leads to higher fatality rates, increased property damage, and greater road congestion. Existing literature addresses this issue by giving EVs higher priority while they travel to an incident place, for example by turning the traffic signals on their travel path green. A few works have also attempted to find the best route for an EV using traffic information (e.g., number of vehicles, flow rate, and clearance time) at the beginning of the journey. However, these works did not consider the congestion or disruption faced by non-emergency vehicles adjacent to the EV travel path, and the selected travel paths are static, ignoring traffic parameters that change while EVs are en route. To address these issues, this article proposes an Unmanned Aerial Vehicle (UAV)-guided, priority-based incident management system that assists EVs in obtaining better clearance times at intersections and thus a lower response time. The proposed model also considers the disruption faced by other non-emergency vehicles adjacent to the EVs' travel path and selects an optimal solution by controlling the traffic signal phase time, ensuring that EVs reach the incident place on time while causing minimal disruption to other on-road vehicles. Simulation results indicate that the proposed model achieves an 8% lower response time for EVs, while the clearance time surrounding the incident place improves by 12%.
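The phase-time control idea can be illustrated with a toy rule: extend the green phase on the EV's approach just long enough to clear the queue ahead of it, but cap the extension so that cross-traffic disruption stays bounded. All parameter names and default values here are assumptions for illustration, not the paper's calibrated model:

```python
def ev_green_extension(queue_len, headway_s=2.0, base_green_s=30.0,
                       max_extension_s=20.0):
    """Green time (seconds) to allocate on the EV's approach.

    queue_len:       vehicles queued between the EV and the stop line
    headway_s:       average discharge time per queued vehicle
    base_green_s:    normal green phase duration
    max_extension_s: cap that bounds disruption to cross traffic
    """
    needed = queue_len * headway_s                 # time to flush the queue
    extension = max(0.0, needed - base_green_s)    # extra green required
    # capping the extension is the disruption/priority trade-off
    return base_green_s + min(extension, max_extension_s)
```

In the paper's setting, the queue length would come from UAV observations updated while the EV is en route, which is what makes the control dynamic rather than fixed at journey start.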