13 research outputs found

    Service Function Graph Design And Embedding In Next Generation Internet

    Get PDF
    Network Function Virtualization (NFV) and Software Defined Networking (SDN) are viewed as the techniques to design, deploy and manage future Internet services. NFV provides an effective way to decouple network functions from proprietary hardware, allowing network providers to implement network functions as virtual machines running on standard servers. In the NFV environment, an NFV service request is provisioned in the form of a Service Function Graph (SFG). The SFG defines the exact set of actions or Virtual Network Functions (VNFs) that the data stream of the service request is subjected to. These actions or VNFs need to be embedded onto specific physical (substrate) networks to provide network services for end users. Similarly, SDN decouples the control plane from network devices such as routers and switches. Network control and management is performed via an open interface, and the underlying infrastructure is turned into simple programmable forwarding devices. NFV and SDN are complementary to each other. Specifically, just as network functions run on general-purpose servers, the SDN control plane can be implemented as pure software running on industry-standard hardware. Moreover, automation and virtualization provide both NFV and SDN the tools to achieve their respective goals. In this dissertation, we motivate the importance of service function graph design and focus our attention on the problem of embedding network service requests. Throughout the dissertation, we highlight the unique properties of the service requests and investigate how to efficiently design and embed an SFG for a service request onto a substrate network. We address variations of the service request embedding problem, such as dependence awareness and branch awareness, in service function graph design and embedding. We propose novel algorithms to design and embed service requests with dependence and branch awareness. We also provide the intuition behind our proposed schemes and analyze our suggested approaches over multiple metrics against other embedding techniques.
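
    To make the embedding step concrete, here is a minimal sketch, assuming a linear VNF chain, substrate nodes with CPU capacities, and hop-count routing; the greedy placement and all names (embed_chain, shortest_path, the example topology) are hypothetical illustrations, not the dissertation's algorithms.

        # Minimal sketch of embedding a linear VNF chain onto a substrate network.
        # Illustrative greedy heuristic only; not the dissertation's algorithm.
        from collections import deque

        def shortest_path(adj, src, dst):
            """BFS shortest path over substrate links; returns node list or None."""
            prev = {src: None}
            q = deque([src])
            while q:
                u = q.popleft()
                if u == dst:
                    path = []
                    while u is not None:
                        path.append(u)
                        u = prev[u]
                    return path[::-1]
                for v in adj[u]:
                    if v not in prev:
                        prev[v] = u
                        q.append(v)
            return None

        def embed_chain(vnfs, cpu, adj):
            """Greedily map each VNF to the feasible node with most spare CPU,
            then connect consecutive placements with a shortest substrate path."""
            placement, paths = {}, []
            for i, demand in enumerate(vnfs):
                candidates = [n for n, c in cpu.items() if c >= demand]
                if not candidates:
                    return None  # no substrate node can host this VNF
                node = max(candidates, key=lambda n: cpu[n])
                cpu[node] -= demand
                placement[i] = node
                if i > 0:
                    path = shortest_path(adj, placement[i - 1], node)
                    if path is None:
                        return None  # chain cannot be connected
                    paths.append(path)
            return placement, paths

        # Tiny substrate: four nodes in a ring, CPU capacities, a 3-VNF chain.
        adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
        cpu = {0: 4, 1: 2, 2: 6, 3: 3}
        print(embed_chain([2, 3, 1], cpu, adj))

    A real embedder would also reserve link bandwidth along the chosen paths and backtrack on infeasible placements; the sketch omits both for brevity.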

    A Generalization of the Directed Graph Layering Problem

    Get PDF
    The Directed Layering Problem (DLP) solves a step of the widely used layer-based layout approach to automatically draw directed acyclic graphs. To cater for cyclic graphs, classically a preprocessing step is used that solves the Feedback Arc Set Problem (FASP) to make the graph acyclic before a layering is determined. Here, we present the Generalized Layering Problem (GLP), which solves the combination of DLP and FASP simultaneously, allowing general graphs as input. We show GLP to be NP-complete, present integer programming models to solve it, and perform thorough evaluations on different sets of graphs and with different implementations for the steps of the layer-based approach. We observe that GLP reduces the number of dummy nodes significantly, can produce more compact drawings, and improves on graphs where DLP yields poor aspect ratios.
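
    The combined objective can be pictured with a toy brute-force search: reversing an edge incurs a fixed penalty (the FASP part) and every extra layer an edge spans costs one dummy vertex (the DLP part). The enumeration below is only a sketch of that cost model under assumed weights; the paper itself solves GLP with integer programming.

        # Toy brute-force illustration of the Generalized Layering Problem (GLP):
        # reversing (or flattening) an edge costs `omega`, and each layer an edge
        # spans beyond one costs a dummy vertex. All weights are assumptions.
        from itertools import product

        def glp_cost(layers, edges, omega):
            cost = 0
            for u, v in edges:
                span = layers[v] - layers[u]
                if span <= 0:
                    cost += omega + abs(span)  # reversed or flat edge penalty
                else:
                    cost += span - 1           # dummy vertices along the edge
            return cost

        def solve_glp(nodes, edges, omega=5, max_layer=3):
            best = None
            for assign in product(range(max_layer + 1), repeat=len(nodes)):
                layers = dict(zip(nodes, assign))
                c = glp_cost(layers, edges, omega)
                if best is None or c < best[0]:
                    best = (c, layers)
            return best

        # A small cyclic graph: the cycle forces at least one reversed edge.
        nodes = ["a", "b", "c", "d"]
        edges = [("a", "b"), ("b", "c"), ("c", "a"), ("b", "d")]
        print(solve_glp(nodes, edges))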

    Optimization of routing-based clustering approaches in wireless sensor network: Review and open research issues

    Full text link
    In today's sensor network research, numerous technologies are used to enhance earlier studies that focused on cost-effectiveness, time savings, and novel approaches. This survey presents complete details about those earlier models and their research gaps. In general, clustering focuses on managing the energy budget in wireless sensor networks (WSNs). In this study, we concentrate primarily on multihop routing in a clustering environment. Our study is organized according to cluster-related parameters and properties and is subdivided into three approach categories: (1) parameter-based, (2) optimization-based, and (3) methodology-based. Within these categories, several techniques are identified, and their concepts, parameters, advantages, and disadvantages are elaborated. On this basis, we provide useful information for readers investigating their own research ideas and developing novel models to overcome the drawbacks present in WSN-based clustering models.
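
    As a flavor of the routing-based clustering schemes such surveys cover, the snippet below sketches a LEACH-style cluster-head election round; the probability threshold follows the classic LEACH formula, while the function name, parameters, and node data are illustrative assumptions, not anything from this paper.

        # Illustrative LEACH-style cluster-head election round (a classic
        # routing-based clustering scheme). Names and parameters are generic
        # assumptions for illustration, not taken from the surveyed paper.
        import random

        def elect_cluster_heads(node_ids, p=0.1, round_no=0, been_head=frozenset()):
            """Each eligible node becomes cluster head with LEACH threshold
            T = p / (1 - p * (r mod 1/p)); recent heads sit the round out."""
            period = int(1 / p)
            threshold = p / (1 - p * (round_no % period))
            return [n for n in node_ids
                    if n not in been_head and random.random() < threshold]

        random.seed(42)
        heads = elect_cluster_heads(range(20), p=0.1, round_no=3)
        print("cluster heads this round:", heads)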

    Dynamic Hierarchical Graph Drawing

    Get PDF

    Multi-Quality Auto-Tuning by Contract Negotiation

    Get PDF
    A characteristic challenge of software development is the management of omnipresent change. Classically, this constant change is driven by customers changing their requirements. The wish to optimally leverage available resources opens another source of change: the software system's environment. Software is tailored to specific platforms (e.g., hardware architectures), resulting in many variants of the same software optimized for different environments. If the environment changes, a different variant is to be used, i.e., the system has to reconfigure to the variant optimized for the new situation. The automation of such adjustments is the subject of the research community of self-adaptive systems. The basic principle is a control loop, as known from control theory. The system (and environment) is continuously monitored, the collected data is analyzed, and decisions for or against a reconfiguration are computed and realized. Central problems in this field, which are addressed in this thesis, are the management of interdependencies between non-functional properties of the system, the handling of multiple criteria in decision making, and scalability. In this thesis, a novel approach to self-adaptive software, Multi-Quality Auto-Tuning (MQuAT), is presented, which provides design and operation principles for software systems that automatically provide the best possible utility to the user while producing the least possible cost. For this purpose, a component model has been developed, enabling the software developer to design and implement self-optimizing software systems in a model-driven way. This component model allows for the specification of the structure as well as the behavior of the system and is capable of covering the runtime state of the system. The notion of quality contracts is utilized to cover the non-functional behavior and, especially, the dependencies between non-functional properties of the system. At runtime, the component model covers the runtime state of the system. This runtime model is used in combination with the contracts to generate optimization problems in different formalisms (Integer Linear Programming (ILP), Pseudo-Boolean Optimization (PBO), Ant Colony Optimization (ACO), and Multi-Objective Integer Linear Programming (MOILP)). Standard solvers are applied to derive solutions to these problems, which represent reconfiguration decisions if the identified configuration differs from the current one. Each approach is empirically evaluated in terms of its scalability, showing the feasibility of all approaches except ACO, the superiority of ILP over PBO, and the limits of all approaches: 100 component types for ILP, 30 for PBO, 10 for ACO, and 30 for 2-objective MOILP. In the presence of more than two objective functions, the MOILP approach is shown to be infeasible.
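
    The decision problem handed to the solvers can be pictured with a tiny example: choose one implementation variant per component so that resource demands fit the platform and overall utility is maximal. The brute-force search below is only a stand-in for the ILP/PBO formulations described above, and every concrete name and number in it is invented for illustration.

        # Sketch of the reconfiguration decision MQuAT delegates to a solver:
        # pick one variant per component so resource demand fits the platform
        # and utility is maximal. Brute force stands in for the ILP/PBO models;
        # component names, demands, and utilities are made up.
        from itertools import product

        variants = {
            "encoder": [  # (name, cpu_demand, utility)
                ("fast",    4, 6),
                ("compact", 1, 3),
            ],
            "store": [
                ("ssd", 2, 5),
                ("ram", 5, 9),
            ],
        }

        def best_configuration(variants, cpu_budget):
            best = None
            for combo in product(*variants.values()):
                cpu = sum(v[1] for v in combo)
                if cpu > cpu_budget:
                    continue  # violates the platform's resource contract
                utility = sum(v[2] for v in combo)
                if best is None or utility > best[0]:
                    best = (utility, {c: v[0] for c, v in zip(variants, combo)})
            return best

        print(best_configuration(variants, cpu_budget=6))

    An ILP encoding of the same choice would use one binary variable per variant, a one-variant-per-component constraint, and the CPU budget as a linear constraint, which is what allows standard solvers to scale beyond toy sizes.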

    SHARING WITH LIVE MIGRATION ENERGY OPTIMIZATION TASK SCHEDULER FOR CLOUD COMPUTING DATACENTRES

    Get PDF
    The use of cloud computing is expanding, and it is becoming the driver of innovation in companies serving customers around the world. Much attention has recently been drawn to the huge amount of energy consumed within datacentres, while the energy consumption of the remaining cloud components is neglected. Energy consumption should therefore be reduced in a way that minimizes performance losses, achieves the target battery lifetime, satisfies performance requirements, minimizes power consumption and CO2 emissions, maximizes profit, and maximizes resource utilization. Power consumption in cloud computing datacentres can be reduced in many ways, such as managing or utilizing the resources, controlling redundancy, relocating datacentres, improving applications, or dynamic voltage and frequency scaling. One of the most efficient ways to reduce power is to use a scheduling technique that finds the best task execution order based on user demands, with minimum execution time and cloud resources. Designing an effective and efficient task scheduling technique driven by user requirements is a considerable challenge in a cloud environment. Scheduling is not an easy task because the datacentre contains dissimilar hardware with different capacities; to improve resource utilization, an efficient scheduling algorithm must be applied to the incoming tasks to achieve efficient computing resource allocation and power optimization. The scheduler must maintain the balance between Quality of Service and fairness among the jobs so that efficiency may be increased. The aim of this project is to propose a novel method for optimizing energy usage in cloud computing environments that satisfies Quality of Service (QoS) requirements and the regulations of the Service Level Agreement (SLA). Applying a power- and resource-optimized scheduling algorithm helps to control and improve the mapping between datacentre servers and incoming tasks, achieving optimal deployment of datacentre resources for good computing efficiency, network load minimization, and reduced energy consumption. This thesis explores energy-aware cloud computing datacentre structures with diverse scheduling heuristics and proposes a novel job scheduling technique with sharing and live migration based on file locality (SLM), aiming to maximize efficiency and save the power consumed in the datacentre through bandwidth usage utilization, minimizing the processing time and the system's total makespan. The proposed SLM energy-efficient scheduling strategy has four basic algorithms: (1) job classifier, (2) SLM job scheduler, (3) dual-fold VM virtualization, and (4) VM threshold margins and consolidation. The SLM job classifier categorizes the incoming set of user requests to the datacentre into two queues based on the request type and the source file needed to process them. The processing time of each job fluctuates with the job type and the number of instructions per job. The second algorithm, the SLM scheduler, dispatches jobs from both queues according to job arrival time and controls the allocation to the most appropriate available VM based on job similarity, according to a predefined synchronized job characteristic table (SJC). The SLM scheduler uses a replicated hosts infrastructure to save the energy wasted by idle hosts, maximizing the utilization of the basic hosts as long as the system can handle the workflow while the replicated hosts are set to off mode. The third SLM algorithm, the dual-fold VM algorithm, divides the active VMs into top-level and low-level slots to allocate similar jobs concurrently, which maximizes host utilization at high workload and reduces the total makespan. The VM threshold margins and consolidation algorithm sets upper and lower threshold margins as triggers for VM consolidation and load balancing among running VMs, and deploys a continuous detection scheme for overloaded and underutilized VMs to maintain and control the system's workload balance. Consolidation and load balancing are achieved by performing a series of dynamic live migrations, which provide auto-scaling for the servers within the datacentres. This thesis begins with an overview of cloud computing, then reviews conceptual cloud resource management strategies with a classification of scheduling heuristics. Following this, a competitive analysis of energy-efficient scheduling algorithms and related work is presented. The novel SLM algorithm is proposed and evaluated using the CloudSim toolkit under a number of scenarios; compared to the Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) algorithms, the results show a significant improvement in energy usage levels and in the total makespan, the time needed to finish processing all tasks.
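
    The threshold-margin trigger in the fourth algorithm can be sketched in a few lines: hosts above an upper utilization margin shed VMs via live migration, and hosts below a lower margin are drained and powered off. The classification below is a hedged illustration with invented thresholds and host data; it is not code from the thesis.

        # Hedged sketch of a threshold-margin consolidation trigger: hosts above
        # the upper margin shed load via live migration, hosts below the lower
        # margin are drained and switched off. Thresholds and hosts are invented.

        def classify_hosts(utilization, lower=0.3, upper=0.8):
            """Map each host to the action its utilization margin implies."""
            actions = {}
            for host, u in utilization.items():
                if u > upper:
                    actions[host] = "migrate-out"   # overloaded: move VMs away
                elif u < lower:
                    actions[host] = "consolidate"   # underutilized: drain, power off
                else:
                    actions[host] = "keep"          # inside margins: leave alone
            return actions

        print(classify_hosts({"h1": 0.92, "h2": 0.55, "h3": 0.12}))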

    Advances in Evolutionary Algorithms

    Get PDF
    With the recent trends towards massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of the book is to present recent improvements, innovative ideas, and concepts from a part of the huge field of evolutionary algorithms.

    Resilient scalable internet routing and embedding algorithms

    Get PDF

    Applying ant colony optimization metaheuristic to the DAG layering problem

    Get PDF
    This paper presents the design and implementation of an Ant Colony Optimization based algorithm for solving the DAG Layering Problem. The algorithm produces compact layerings by minimising their width and height. Importantly, it takes into account the contribution of dummy vertices to the width of the resulting layering.
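
    A rough sketch of the ACO ingredients named here, under an assumed, simplified cost model: ants sample layer assignments guided by pheromone, candidate layerings are scored by width (counting dummy vertices) plus height, and pheromone is reinforced along the best assignment. The parameters, the penalty for direction-violating edges, and the example graph are all assumptions for illustration, not the paper's algorithm.

        # Sketch of an ACO loop for DAG layering: pheromone-guided sampling,
        # width+height scoring with dummy vertices, best-so-far reinforcement.
        # Cost model and parameters are simplified illustrations.
        import random

        def aco_layering(nodes, edges, max_layer=3, ants=20, iters=50,
                         evaporation=0.1, seed=0):
            rng = random.Random(seed)
            layer_range = range(max_layer + 1)
            tau = {(v, l): 1.0 for v in nodes for l in layer_range}  # pheromone

            def cost(assign):
                # width counts real plus dummy vertices; height = layers used
                occupancy = {l: 0 for l in layer_range}
                for v in nodes:
                    occupancy[assign[v]] += 1
                penalty = 0
                for u, v in edges:
                    if assign[v] <= assign[u]:
                        penalty += 100  # layering must respect edge direction
                    else:
                        for l in range(assign[u] + 1, assign[v]):
                            occupancy[l] += 1  # dummy vertex per crossed layer
                width = max(occupancy.values())
                height = len({assign[v] for v in nodes})
                return width + height + penalty

            best = None
            for _ in range(iters):
                for _ in range(ants):
                    assign = {v: rng.choices(list(layer_range),
                                             weights=[tau[(v, l)] for l in layer_range])[0]
                              for v in nodes}
                    c = cost(assign)
                    if best is None or c < best[0]:
                        best = (c, assign)
                for key in tau:             # evaporate, then reinforce best-so-far
                    tau[key] *= 1 - evaporation
                for v, l in best[1].items():
                    tau[(v, l)] += 1.0
            return best

        nodes = ["a", "b", "c", "d"]
        edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
        print(aco_layering(nodes, edges))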