
    An Energy-driven Network Function Virtualization for Multi-domain Software Defined Networks

    Network Functions Virtualization (NFV) in Software Defined Networks (SDN) emerged as a new technology for creating virtual instances for the smooth execution of multiple applications. Their amalgamation provides flexible and programmable platforms that utilize network resources to deliver Quality of Service (QoS) to various applications. In SDN-enabled NFV setups, the underlying network services can be viewed as a series of virtual network functions (VNFs), and their optimal deployment on physical/virtual nodes is a challenging task. Moreover, SDNs have evolved from single-domain to multi-domain setups in recent years, which has increased the complexity of the underlying VNF deployment problem manifold. The energy utilization aspect also remains relatively unexplored with respect to an optimal mapping of VNFs across multiple SDN domains. Hence, in this work, the VNF deployment problem in a multi-domain SDN setup is addressed with a primary emphasis on reducing the overall energy consumed in deploying the maximum number of VNFs with guaranteed QoS. The problem at hand is initially formulated as a multi-objective optimization problem based on Integer Linear Programming (ILP) to obtain an optimal solution. However, the formulated ILP becomes complex to solve as the number of decision variables and constraints grows with the size of the network. Thus, we leverage popular evolutionary optimization algorithms to solve the problem under consideration. To identify the most appropriate evolutionary algorithm for the considered problem, we evaluate different variants on the widely used MOEA Framework (an open-source Java framework for multi-objective evolutionary algorithms).
    Comment: Accepted for publication in IEEE INFOCOM 2019 Workshop on Intelligent Cloud Computing and Networking (ICCN 2019).
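    To make the evaluation setup concrete, the sketch below shows how a bi-objective VNF placement problem (minimize energy, maximize accepted VNFs) can be expressed and run on the MOEA Framework's 3.x-style API. The problem size, per-node energy figures, and capacity limit are illustrative assumptions, not values from the paper.

```java
import org.moeaframework.Executor;
import org.moeaframework.core.NondominatedPopulation;
import org.moeaframework.core.Solution;
import org.moeaframework.core.variable.EncodingUtils;
import org.moeaframework.problem.AbstractProblem;

/** Toy bi-objective VNF placement: one integer variable per VNF gives its
 *  hosting node; objectives are total energy and (negated) accepted VNFs.
 *  All figures below are illustrative assumptions. */
public class VnfPlacement extends AbstractProblem {
    private static final int VNFS = 10, NODES = 4;
    private static final double[] NODE_ENERGY = {1.0, 1.5, 0.8, 2.0}; // per-VNF cost
    private static final int NODE_CAPACITY = 3;                        // VNFs per node

    public VnfPlacement() {
        super(VNFS, 2); // 10 decision variables, 2 objectives
    }

    @Override
    public Solution newSolution() {
        Solution s = new Solution(VNFS, 2);
        for (int i = 0; i < VNFS; i++) {
            s.setVariable(i, EncodingUtils.newInt(0, NODES - 1));
        }
        return s;
    }

    @Override
    public void evaluate(Solution s) {
        int[] placement = EncodingUtils.getInt(s);
        int[] load = new int[NODES];
        double energy = 0.0;
        int accepted = 0;
        for (int node : placement) {
            if (load[node] < NODE_CAPACITY) { // capacity treated as a soft constraint
                load[node]++;
                energy += NODE_ENERGY[node];
                accepted++;
            }
        }
        s.setObjective(0, energy);     // minimize total energy
        s.setObjective(1, -accepted);  // maximize accepted VNFs (negated for minimization)
    }

    public static void main(String[] args) {
        NondominatedPopulation result = new Executor()
                .withProblemClass(VnfPlacement.class)
                .withAlgorithm("NSGAII") // swap in "SPEA2", "MOEA/D", etc. to compare
                .withMaxEvaluations(10000)
                .run();
        result.forEach(sol -> System.out.printf("energy=%.2f accepted=%d%n",
                sol.getObjective(0), (int) -sol.getObjective(1)));
    }
}
```

    Swapping the algorithm name passed to withAlgorithm is how the framework supports the kind of cross-algorithm comparison the abstract describes.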

    Big data analytics: Computational intelligence techniques and application areas

    Big Data has a significant impact on developing functional smart cities and supporting modern societies. In this paper, we investigate the importance of Big Data in modern life and economy and discuss challenges arising from Big Data utilization. Different computational intelligence techniques are considered as tools for Big Data analytics. We also explore the powerful combination of Big Data and Computational Intelligence (CI) and identify a number of areas where novel applications for real-world smart city problems can be developed using these powerful tools and techniques. We present a case study on intelligent transportation in the context of a smart city, together with a novel data modelling methodology based on a biologically inspired universal generative modelling approach called the Hierarchical Spatial-Temporal State Machine (HSTSM). We further discuss various implications of policy, protection, valuation, and commercialization related to Big Data, its applications, and its deployment.

    Content-aware resource allocation model for IPTV delivery networks

    Nowadays, with the evolution of digital video broadcasting and the advent of high-speed broadband networks, a new era of TV services has emerged, known as IPTV. IPTV is a system that employs high-speed broadband networks to deliver TV services to subscribers. From the service provider's viewpoint, the challenge in IPTV systems is how to build delivery networks that exploit resources efficiently while also reducing the service cost. However, the design of such delivery networks is affected by many factors, including the choice of a suitable network architecture, load balancing, resource waste, and cost reduction. Furthermore, IPTV content characteristics, particularly size, popularity, and interactivity, play an important role in balancing the load and avoiding resource waste in delivery networks. In this paper, we investigate the problem of resource allocation for IPTV delivery networks over the recent peer-service-area architecture. A Genetic Algorithm is used as an optimization tool to find the optimal provisioning parameters, including storage, bandwidth, and CPU consumption. The experiments have been conducted on two data sets with different popularity distributions. The experimental results show the impact of content status on the resource allocation process.
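    As a rough illustration of how a Genetic Algorithm can search such a provisioning space, the sketch below evolves per-service-area cache fractions against a toy cost model that trades storage against upstream bandwidth. The cost weights, miss-rate curve, and GA settings are assumptions for illustration, not the paper's model.

```java
import java.util.Arrays;
import java.util.Random;

/** Minimal GA sketch: decide, per service area, the fraction of the content
 *  catalogue to cache locally; fitness trades storage cost against the
 *  bandwidth cost of fetching uncached (less popular) items upstream. */
public class IptvGa {
    static final int AREAS = 8, POP = 40, GENS = 200;
    static final double STORAGE_COST = 1.0, BANDWIDTH_COST = 4.0;
    static final Random RNG = new Random(42);

    // Assumed popularity skew: caching the top fraction cuts misses superlinearly.
    static double missRate(double cached) { return Math.pow(1.0 - cached, 1.5); }

    // Total provisioning cost over all service areas (lower is better).
    static double cost(double[] genes) {
        double total = 0;
        for (double g : genes)
            total += STORAGE_COST * g + BANDWIDTH_COST * missRate(g);
        return total;
    }

    static double[] tournament(double[][] pop) {
        double[] a = pop[RNG.nextInt(POP)], b = pop[RNG.nextInt(POP)];
        return cost(a) < cost(b) ? a : b;
    }

    public static void main(String[] args) {
        // Random initial population of cache-fraction vectors in [0, 1].
        double[][] pop = new double[POP][AREAS];
        for (double[] ind : pop)
            for (int i = 0; i < AREAS; i++) ind[i] = RNG.nextDouble();

        for (int gen = 0; gen < GENS; gen++) {
            double[][] next = new double[POP][];
            for (int k = 0; k < POP; k++) {
                double[] a = tournament(pop), b = tournament(pop);
                double[] child = new double[AREAS];
                for (int i = 0; i < AREAS; i++) {
                    child[i] = RNG.nextBoolean() ? a[i] : b[i]; // uniform crossover
                    if (RNG.nextDouble() < 0.1)                 // Gaussian mutation, clamped
                        child[i] = Math.min(1, Math.max(0, child[i] + RNG.nextGaussian() * 0.1));
                }
                next[k] = child;
            }
            pop = next;
        }
        double[] best = Arrays.stream(pop)
                .min((x, y) -> Double.compare(cost(x), cost(y))).get();
        System.out.printf("best cost %.3f, cache fractions %s%n",
                cost(best), Arrays.toString(best));
    }
}
```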

    InterCloud: Utility-Oriented Federation of Cloud Computing Environments for Scaling of Application Services

    Cloud computing providers have set up several data centers at different geographical locations over the Internet in order to optimally serve the needs of their customers around the world. However, existing systems do not support mechanisms and policies for dynamically coordinating load distribution among different Cloud-based data centers in order to determine the optimal location for hosting application services at reasonable QoS levels. Further, Cloud computing providers are unable to predict the geographic distribution of users consuming their services, so load coordination must happen automatically and the distribution of services must change in response to changes in load. To counter this problem, we advocate the creation of a federated Cloud computing environment (InterCloud) that facilitates just-in-time, opportunistic, and scalable provisioning of application services, consistently achieving QoS targets under variable workload, resource, and network conditions. The overall goal is to create a computing environment that supports dynamic expansion or contraction of capabilities (VMs, services, storage, and databases) for handling sudden variations in service demands. This paper presents the vision, challenges, and architectural elements of InterCloud for utility-oriented federation of Cloud computing environments. The proposed InterCloud environment supports scaling of applications across multiple vendor clouds. We have validated our approach by conducting a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the federated Cloud computing model has immense potential, as it offers significant performance gains in response time and cost savings under dynamic workload scenarios.
    Comment: 20 pages, 4 figures, 3 tables, conference paper.
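    For readers unfamiliar with the toolkit, the sketch below outlines a minimal CloudSim 3.x-style scenario with two datacenters standing in for federated sites, one VM, and one cloudlet; all capacities and cost parameters are placeholders rather than the paper's experimental configuration.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import org.cloudbus.cloudsim.*;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;

/** Minimal two-datacenter CloudSim scenario; values are placeholders. */
public class InterCloudDemo {

    private static Datacenter createDatacenter(String name) throws Exception {
        List<Pe> peList = new ArrayList<>();
        peList.add(new Pe(0, new PeProvisionerSimple(1000))); // one 1000-MIPS core
        List<Host> hosts = new ArrayList<>();
        hosts.add(new Host(0, new RamProvisionerSimple(2048),
                new BwProvisionerSimple(10000), 1_000_000, peList,
                new VmSchedulerTimeShared(peList)));
        DatacenterCharacteristics chars = new DatacenterCharacteristics(
                "x86", "Linux", "Xen", hosts, 10.0, 3.0, 0.05, 0.001, 0.0);
        return new Datacenter(name, chars, new VmAllocationPolicySimple(hosts),
                new LinkedList<Storage>(), 0);
    }

    public static void main(String[] args) throws Exception {
        CloudSim.init(1, Calendar.getInstance(), false);
        createDatacenter("DC_Europe"); // two sites stand in for a federation
        createDatacenter("DC_Asia");

        DatacenterBroker broker = new DatacenterBroker("Broker");
        Vm vm = new Vm(0, broker.getId(), 1000, 1, 512, 1000, 10_000, "Xen",
                new CloudletSchedulerTimeShared());
        broker.submitVmList(Arrays.asList(vm));

        UtilizationModel full = new UtilizationModelFull();
        Cloudlet task = new Cloudlet(0, 400_000, 1, 300, 300, full, full, full);
        task.setUserId(broker.getId());
        broker.submitCloudletList(Arrays.asList(task));

        CloudSim.startSimulation();
        CloudSim.stopSimulation();
        for (Cloudlet c : broker.getCloudletReceivedList()) {
            System.out.printf("cloudlet %d finished at t=%.2f%n",
                    c.getCloudletId(), c.getFinishTime());
        }
    }
}
```

    Measuring how cloudlet finish times shift as load moves between the two datacenters is, in miniature, the kind of response-time comparison the abstract reports.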

    Distributed Environment for Efficient Virtual Machine Image Management in Federated Cloud Architectures

    The use of Virtual Machines (VMs) in Cloud computing provides various benefits across the software engineering lifecycle, including efficient elasticity mechanisms that result in higher resource utilization and lower operational costs. VMs as software artifacts are created from provider-specific templates, called VM images (VMIs), and are stored in proprietary or public repositories for further use. However, some technology-specific choices can limit interoperability among Cloud providers and bundle VMIs with nonessential or redundant software packages, leading to increased storage size, prolonged VMI delivery, slow VMI instantiation, and ultimately vendor lock-in. To address these challenges, we present a set of novel functionalities and design approaches for the efficient operation of distributed VMI repositories, specifically tailored to enable: (i) simplified creation of lightweight, size-optimized VMIs tuned to specific application requirements; (ii) multi-objective VMI repository optimization; and (iii) an efficient reasoning mechanism to help optimize complex VMI operations. The evaluation results confirm that the presented approaches can reduce VMI size by up to 55% while trimming image creation time by 66%. Furthermore, the repository optimization algorithms can reduce VMI delivery time by up to 51% and cut storage expenses by 3%. Moreover, by implementing replication strategies, the optimization algorithms can increase system reliability by 74%.
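    The paper's optimization model is not spelled out in the abstract, so the sketch below illustrates only the general idea behind replication for faster delivery: a greedy heuristic that replicates the images with the highest demand per gigabyte until a storage budget is exhausted. The catalogue, sizes, and budget are invented for illustration and do not reflect the paper's algorithms.

```java
import java.util.ArrayList;
import java.util.List;

/** Greedy sketch of replica selection in a distributed VMI repository:
 *  replicate the most frequently delivered images first, while the storage
 *  budget allows, to cut expected delivery time. Figures are assumptions. */
public class VmiReplication {
    record Image(String name, double sizeGb, double requestsPerDay) {}

    public static void main(String[] args) {
        List<Image> catalogue = List.of(
                new Image("ubuntu-base", 1.2, 120),
                new Image("analytics-stack", 6.5, 80),
                new Image("legacy-app", 9.0, 5),
                new Image("web-tier", 2.1, 200));

        double budgetGb = 10.0; // extra storage available for replicas

        // Rank by demand density: requests served per GB of replica storage.
        List<Image> ranked = new ArrayList<>(catalogue);
        ranked.sort((a, b) -> Double.compare(
                b.requestsPerDay() / b.sizeGb(), a.requestsPerDay() / a.sizeGb()));

        List<Image> replicated = new ArrayList<>();
        for (Image img : ranked) {
            if (img.sizeGb() <= budgetGb) { // replicate while budget remains
                replicated.add(img);
                budgetGb -= img.sizeGb();
            }
        }
        replicated.forEach(i -> System.out.println("replicate: " + i.name()));
    }
}
```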