2,239 research outputs found

    CloudMon: a resource-efficient IaaS cloud monitoring system based on networked intrusion detection system virtual appliances

    The networked intrusion detection system virtual appliance (NIDS-VA), also known as a virtualized NIDS, plays an important role in protecting and safeguarding IaaS cloud environments. However, it is nontrivial to guarantee both the performance of the NIDS-VA and the resource efficiency of cloud applications, because both share computing resources in the same cloud environment. To overcome this challenge and trade-off, we propose a novel system, named CloudMon, which enables dynamic resource provisioning and live placement for NIDS-VAs in IaaS cloud environments. CloudMon provides two techniques to maintain high resource efficiency in IaaS cloud environments without degrading the performance of NIDS-VAs and other virtual machines (VMs). The first is a virtual machine monitor based resource provisioning mechanism, which minimizes the resource usage of a NIDS-VA under a given performance guarantee. It uses a fuzzy model to characterize the complex relationship between the performance and resource demands of a NIDS-VA, and develops an online fuzzy controller to adaptively control resource allocation for NIDS-VAs under varying network traffic. The second is a global resource scheduling approach for optimizing the resource efficiency of the entire cloud environment. It leverages VM migration to dynamically place NIDS-VAs and VMs, and an online VM mapping algorithm is designed to maximize the resource utilization of the entire cloud environment. Our virtual machine monitor based resource provisioning mechanism has been evaluated through comprehensive experiments with the Xen hypervisor and the Snort NIDS in a real cloud environment. The results show that the proposed mechanism can allocate resources to a NIDS-VA on demand while still satisfying its performance requirements. We also verify the effectiveness of our global resource scheduling approach by comparing it with two classic vector packing algorithms; the results show that our approach improves the resource utilization of cloud environments and reduces the number of in-use NIDS-VAs and physical hosts. The authors gratefully acknowledge the anonymous reviewers for their helpful suggestions and insightful comments to improve the quality of the paper. The work reported in this paper has been partially supported by the National Natural Science Foundation of China (No. 61202424, 61272165, 91118008), the China 863 Program (No. 2011AA01A202), the Natural Science Foundation of Jiangsu Province of China (BK20130528), and the China 973 Fundamental R&D Program (2011CB302600).
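    To make the control idea concrete, the following minimal Python sketch shows an online controller that periodically resizes a NIDS-VA's CPU cap so its packet-drop rate stays near a target, in the spirit of CloudMon's fuzzy provisioning mechanism. The crisp rule table, the thresholds, and the monitor/hypervisor interfaces are all hypothetical stand-ins for the paper's actual fuzzy model and defuzzification.

        # Sketch only: rule table, thresholds, and the monitor/hypervisor
        # interfaces are assumed, not taken from the CloudMon paper.

        def fuzzy_adjustment(drop_rate, target=0.01):
            """Map the drop-rate error to a CPU-cap change via a coarse rule table."""
            error = drop_rate - target
            if error > 0.05:       # drops far above target: grow allocation sharply
                return +20
            if error > 0.0:        # slightly above target: grow gently
                return +5
            if error > -0.005:     # close to target: hold
                return 0
            return -5              # well below target: reclaim resources

        def control_loop(monitor, hypervisor, vm, steps=100):
            """Sample NIDS performance each interval and adapt the VM's CPU cap."""
            cap = hypervisor.get_cpu_cap(vm)              # e.g., percent of one core
            for _ in range(steps):
                drop_rate = monitor.sample_drop_rate(vm)  # fraction of packets lost
                cap = max(10, min(100, cap + fuzzy_adjustment(drop_rate)))
                hypervisor.set_cpu_cap(vm, cap)

    A real fuzzy controller would replace the crisp thresholds with membership functions and defuzzify the rule outputs, but the feedback structure is the same.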

    The 5th Conference of PhD Students in Computer Science


    Operational Research IO2017, Valença, Portugal, June 28-30

    This proceedings book presents selected contributions from the XVIII Congress of APDIO (the Portuguese Association of Operational Research), held in Valença on June 28–30, 2017. Prepared by leading Portuguese and international researchers in the field of operations research, it covers a wide range of complex real-world applications of operations research methods using recent theoretical techniques, in order to narrow the gap between academic research and practical applications. Of particular interest are the applications of nonlinear and mixed-integer programming, data envelopment analysis, clustering techniques, hybrid heuristics, supply chain management, and lot sizing and job scheduling problems. In most chapters, the problems, methods and methodologies described are complemented by supporting figures, tables and algorithms. The XVIII Congress of APDIO marked the 18th installment of the regular biennial meetings of APDIO, which bring together researchers, scholars and practitioners, as well as MSc and PhD students, working in the field of operations research to present and discuss their latest work. The main theme of the latest meeting was Operational Research Pro Bono. Given the breadth of topics covered, the book offers a valuable resource for all researchers, students and practitioners interested in the latest trends in this field.

    A Multi-objective Optimization Model for Virtual Machine Mapping in Cloud Data Centres

    © 2016 IEEE. Modern cloud computing environments exploit virtualization for efficient resource management to reduce computational cost and energy budget. Virtual machine (VM) migration is a technique that enables flexible resource allocation and increases the computation power and communication capability within cloud data centers. VM migration helps cloud providers achieve various resource management objectives such as load balancing, power management, fault tolerance, and system maintenance. However, the VM migration process can affect the performance of applications unless it is supported by smart optimization methods. This paper presents a multi-objective optimization model to address this issue. The objectives are to minimize power consumption, maximize resource utilization (or minimize idle resources), and minimize VM transfer time. Fuzzy particle swarm optimization (PSO), which improves the efficiency of conventional PSO by using fuzzy logic systems, is used to solve the optimization problem. The model is implemented in a cloud simulator to investigate its performance, and the results verify the performance improvement of the proposed model.
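    As an illustration of how the three objectives can be combined, the Python sketch below scores a candidate VM placement by a weighted sum of estimated power draw, idle capacity on active hosts, and a migration transfer-time proxy. The weights, the linear power model, and the data layout are assumptions; the paper's fuzzy PSO would evaluate something of this kind inside its swarm update rather than a fixed weighted sum.

        # Sketch only: weights, power model, and data layout are assumed.
        # placement/current map vm -> host; hosts maps host -> CPU capacity;
        # vms maps vm -> {"cpu": demand, "mem_mb": memory footprint}.

        def fitness(placement, current, hosts, vms,
                    w_power=0.4, w_idle=0.3, w_xfer=0.3, bandwidth_mbps=1000.0):
            """Score a placement; lower is better."""
            used = {h: 0.0 for h in hosts}
            for vm, host in placement.items():
                used[host] += vms[vm]["cpu"]
            power = idle = 0.0
            for h, capacity in hosts.items():
                util = used[h] / capacity
                if util > 0:                        # host is switched on
                    power += 100.0 + 150.0 * util   # assumed idle + dynamic watts
                    idle += 1.0 - util              # unused share of an active host
            xfer = sum(vms[vm]["mem_mb"] / bandwidth_mbps   # transfer-time proxy
                       for vm, host in placement.items()
                       if current.get(vm) != host)
            return w_power * power + w_idle * idle + w_xfer * xfer

    Each particle in the swarm would encode one such placement, with fuzzy rules adapting inertia and learning coefficients between iterations.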

    Language and Compiler Support for Auto-Tuning Variable-Accuracy Algorithms

    Approximating ideal program outputs is a common technique for solving computationally difficult problems, for adhering to processing or timing constraints, and for performance optimization in situations where perfect precision is not necessary. To this end, programmers often use approximation algorithms, iterative methods, data resampling, and other heuristics. However, programming such variable-accuracy algorithms presents difficult challenges, since the optimal algorithms and parameters may change with different accuracy requirements and usage environments. This problem is further compounded when multiple variable-accuracy algorithms are nested together, due to the complex way that accuracy requirements can propagate across algorithms and because of the resulting size of the set of allowable compositions. As a result, programmers often deal with this issue in an ad hoc manner that can sometimes violate sound programming practices such as maintaining library abstractions. In this paper, we propose language extensions that expose trade-offs between time and accuracy to the compiler. The compiler performs fully automatic compile-time and install-time autotuning and analyses in order to construct optimized algorithms that achieve any given target accuracy. We present novel compiler techniques and a structured genetic tuning algorithm to search the space of candidate algorithms and accuracies in the presence of recursion and sub-calls to other variable-accuracy code. These techniques benefit both the library writer, by providing an easy way to describe and search the parameter and algorithmic choice space, and the library user, by allowing high-level specification of accuracy requirements which are then met automatically without the need for the user to understand any algorithm-specific parameters. Additionally, we present a new suite of benchmarks, written in our language, to examine the efficacy of our techniques. Our experimental results show that by relaxing accuracy requirements, we can easily obtain performance improvements ranging from 1.1x to orders of magnitude of speedup.
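    The tuning loop itself can be pictured with a short sketch: given a set of candidate algorithm configurations, measure each one's runtime and output accuracy, and keep the fastest that still meets the target. The random search below is a deliberately simplified stand-in for the paper's structured genetic tuner, and every name in it is hypothetical.

        import random
        import time

        # Sketch only: a random-search stand-in for the paper's structured
        # genetic tuner; candidates, run, and measure_accuracy are assumed.

        def autotune(candidates, run, measure_accuracy, target_accuracy, trials=50):
            """Return the fastest configuration meeting the accuracy target."""
            best_cfg, best_time = None, float("inf")
            for _ in range(trials):
                cfg = random.choice(candidates)   # genetic operators in the paper
                start = time.perf_counter()
                output = run(cfg)                 # execute one algorithmic variant
                elapsed = time.perf_counter() - start
                if measure_accuracy(output) >= target_accuracy and elapsed < best_time:
                    best_cfg, best_time = cfg, elapsed
            return best_cfg

    The paper's tuner additionally handles nested variable-accuracy calls, where the accuracy target of an outer algorithm constrains the targets it may request of its sub-calls.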

    A Polyhedral Study of Mixed 0-1 Set

    We consider a variant of the well-known single node fixed charge network flow set with constant capacities. This set arises as a relaxation of more general mixed integer sets, such as lot-sizing problems with multiple suppliers. We provide a complete polyhedral characterization of the convex hull of the given set.
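    For readers unfamiliar with the object, one common form of the constant-capacity single node fixed charge network flow set is given below; the exact variant studied in the paper may differ, for instance in allowing inflows or arc-dependent bounds.

        % One standard form of the set (an assumption, not the paper's exact variant):
        \[
          X \;=\; \Bigl\{\, (x, y) \in \mathbb{R}_{+}^{n} \times \{0,1\}^{n} \;:\;
              \textstyle\sum_{j=1}^{n} x_j \le b, \quad
              x_j \le u\, y_j \;\; (j = 1, \dots, n) \,\Bigr\}
        \]
        % Arc j may carry flow x_j up to the constant capacity u only if its
        % fixed charge is paid (y_j = 1); b bounds the total flow at the node.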

    Type-2 Fuzzy Single and Multi-Objective Optimisation Systems for Telecommunication Capacity Planning

    Capacity planning in the telecommunications industry aims to maximise the effectiveness of implemented bandwidth equipment whilst allowing for equipment to be upgraded without a loss of service. The better implemented hardware can be configured, the better the service provided to consumers. Additionally, the easier it is to rearrange existing hardware with minimal loss of service to the consumer, the easier it is to remove older equipment and replace it with newer, more effective equipment. The newer equipment can provide more bandwidth whilst consuming less power and producing less heat, lowering the overall operating costs and carbon footprint of a large-scale network. Resilient routing is the idea of providing multiple independent, non-intersecting routes between two locations within a graph. For telecommunications organisations this can be used to reduce the downtime faced by consumers if there is a fault within a network. It can also be used to provide assurances to customers that rely on a network connection, such as financial institutions or government agencies. This thesis looks at capacity planning within telecommunications with the aspiration of creating a set of optimisation systems that can rearrange data exchange hardware to maximise its performance at minimal cost, minimising downtime while allowing adaptations to an exchange's configuration in order to perform upgrades. The proposed systems were developed with data from British Telecom (BT) and are either deployed or planned to be in the near future. In many cases the data used is confidential; where this is the case, an equivalent open source data set has been used for transparency. As a result of this thesis, the Heated Stack (HS) algorithm was created, which has been shown to outperform the popular and successful NSGA-II algorithm by up to 92% and NSGA-III by up to 69% at general optimisation tasks. HS also outperforms NSGA-II in 100% of the physical capacity planning experiments run and NSGA-III in 68% of the physical capacity planning experiments run. Additionally, as a result of this thesis the N-Non-Intersecting-Routing algorithm was shown to outperform Dijkstra's algorithm by up to 38% at resilient routing. Finally, a new method of performing configuration planning through backwards induction with Monte Carlo Tree Search was proposed.
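    To illustrate what resilient (non-intersecting) routing asks for, the Python sketch below greedily collects up to k internally vertex-disjoint routes by re-running Dijkstra's algorithm and banning the interior nodes of each path found. This naive baseline can miss disjoint routes that a flow-based method such as Suurballe's algorithm would find, and it is not the thesis's N-Non-Intersecting-Routing algorithm, whose details are not given in the abstract.

        import heapq

        def dijkstra(adj, src, dst):
            """Shortest path via Dijkstra; adj maps node -> {neighbour: weight}."""
            dist, prev, seen = {src: 0.0}, {}, set()
            heap = [(0.0, src)]
            while heap:
                d, u = heapq.heappop(heap)
                if u in seen:
                    continue
                seen.add(u)
                if u == dst:                      # reconstruct src -> dst path
                    path = [dst]
                    while path[-1] != src:
                        path.append(prev[path[-1]])
                    return path[::-1]
                for v, w in adj.get(u, {}).items():
                    nd = d + w
                    if v not in seen and nd < dist.get(v, float("inf")):
                        dist[v] = nd
                        prev[v] = u
                        heapq.heappush(heap, (nd, v))
            return None                           # dst unreachable

        def disjoint_routes(adj, src, dst, k):
            """Greedily collect up to k internally vertex-disjoint src -> dst routes."""
            routes, banned = [], set()
            for _ in range(k):
                pruned = {u: {v: w for v, w in nbrs.items() if v not in banned}
                          for u, nbrs in adj.items() if u not in banned}
                path = dijkstra(pruned, src, dst)
                if path is None:
                    break
                routes.append(path)
                banned.update(path[1:-1])         # interior nodes are now off-limits
            return routes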