1,792 research outputs found

    Cloud engineering is search based software engineering too

    Many of the problems posed by the migration of computation to cloud platforms can be formulated and solved using techniques associated with Search Based Software Engineering (SBSE). Much of cloud software engineering involves problems of optimisation: performance, allocation, assignment and the dynamic balancing of resources to achieve pragmatic trade-offs between many competing technical and business objectives. SBSE is concerned with the application of computational search and optimisation to solve precisely these kinds of software engineering challenges. Interest in both cloud computing and SBSE has grown rapidly in the past five years, yet there has been little work on SBSE as a means of addressing cloud computing challenges. Like many computationally demanding activities, SBSE has the potential to benefit from the cloud: ‘SBSE in the cloud’. However, this paper focuses, instead, on the ways in which SBSE can benefit cloud computing. It thus develops the theme of ‘SBSE for the cloud’, formulating cloud computing challenges in ways that can be addressed using SBSE.
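
    As a rough illustration of casting a cloud allocation trade-off as a computational search problem, the sketch below hill-climbs over VM-to-host assignments against a toy cost-plus-overload objective. The host prices, capacities, weighting and the hill-climbing search itself are invented assumptions for the example, not anything taken from the paper.

        import random

        # Illustrative only: a toy "SBSE for the cloud" formulation.
        # Assign each of several virtual machines to one of a few hosts,
        # searching for an assignment that trades off hosting cost against
        # an overload penalty.
        HOST_COST = [1.0, 1.5, 2.2, 3.0]   # assumed per-host price units
        VMS = 12                           # assumed number of VMs
        HOSTS = len(HOST_COST)

        def fitness(assignment):
            """Lower is better: cost of the hosts in use plus an overload penalty."""
            load = [0] * HOSTS
            for host in assignment:
                load[host] += 1
            cost = sum(HOST_COST[h] for h in set(assignment))
            overload = sum(max(0, l - 4) for l in load)   # assumed capacity: 4 VMs/host
            return cost + 2.0 * overload

        def hill_climb(steps=2000, seed=0):
            rng = random.Random(seed)
            best = [rng.randrange(HOSTS) for _ in range(VMS)]
            best_f = fitness(best)
            for _ in range(steps):
                cand = best[:]
                cand[rng.randrange(VMS)] = rng.randrange(HOSTS)   # mutate one placement
                f = fitness(cand)
                if f <= best_f:
                    best, best_f = cand, f
            return best, best_f

        if __name__ == "__main__":
            assignment, score = hill_climb()
            print("assignment:", assignment, "fitness:", round(score, 2))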

    ENERGY EFFICIENT WIRED NETWORKING

    This research proposes a new dynamic energy management framework for a backbone Internet Protocol over Dense Wavelength Division Multiplexing (IP over DWDM) network. Maintaining the logical IP-layer topology is a key constraint of our architecture whilst saving energy through infrastructure sleeping and virtual router migration. The traffic demand in a Tier 2/3 network typically has a regular diurnal pattern based on people's activities: it is high during working hours and much lighter during the hours associated with sleep. When the traffic demand is light, virtual router instances can be consolidated onto a smaller set of physical platforms and the unneeded physical platforms can be put to sleep to save energy. As the traffic demand increases, the sleeping physical platforms can be re-awoken in order to host virtual router instances and so maintain quality of service. Since the IP-layer topology remains unchanged throughout virtual router migration in our framework, there is no network disruption or discontinuity when the physical platforms enter or leave hibernation. However, this migration places extra demands on the optical layer, as additional connections are needed to preserve the logical IP-layer topology whilst forwarding traffic to the new virtual router location. Consequently, dynamic optical connection management is needed for the new framework. Two important issues are considered in the framework: when to trigger virtual router migration, and where to move the virtual router instances. For the first issue, a reactive mechanism is used to trigger virtual router migration by monitoring the network state. For the second, a new evolutionary algorithm called VRM_MOEA is proposed for solving the destination physical platform selection problem, which chooses the appropriate location of virtual router instances as traffic demand varies. A novel hybrid simulation platform is developed to measure the performance of the new framework, able to capture the functionality of the optical layer, the IP-layer data path and the IP/optical control plane. Simulation results show that the network energy saving depends on many factors, such as network topology, quiet and busy thresholds, and traffic load; however, savings of around 30% are possible with typical medium-sized network topologies.
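
    The reactive trigger described above can be pictured as a simple threshold check over monitored platform utilisation. The sketch below is an assumed, minimal version of that idea: the quiet/busy thresholds and the consolidate/expand decisions are illustrative, and the evolutionary destination-selection step (VRM_MOEA) is not reproduced.

        # Illustrative sketch of a reactive migration trigger: compare monitored
        # platform utilisation against assumed "quiet" and "busy" thresholds and
        # signal consolidation (sleep) or expansion (wake).
        QUIET_THRESHOLD = 0.3   # assumed values, not taken from the thesis
        BUSY_THRESHOLD = 0.8

        def trigger(platform_loads):
            """Return 'consolidate', 'expand', or None for a list of utilisations."""
            avg = sum(platform_loads) / len(platform_loads)
            if avg < QUIET_THRESHOLD and len(platform_loads) > 1:
                return "consolidate"   # migrate virtual routers, sleep freed platforms
            if max(platform_loads) > BUSY_THRESHOLD:
                return "expand"        # wake a sleeping platform and migrate load to it
            return None

        if __name__ == "__main__":
            print(trigger([0.15, 0.2, 0.1]))   # light diurnal trough -> 'consolidate'
            print(trigger([0.85, 0.6]))        # busy peak -> 'expand'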

    A Predictive Approach for the Efficient Distribution of Agent-Based Systems on a Hybrid-Cloud

    Hybrid clouds are increasingly used to outsource non-critical applications to public clouds. However, the main challenge within such environments is to ensure a cost-efficient distribution of the systems between the resources that are on and off premises. For Multi Agent Systems (MAS), this challenge is compounded by irregular workload progress and intensive communication between the agents, which may result in high computing and data transfer costs. Thus, in this paper we propose a generic framework for adaptive, cost-efficient deployment of MAS with a special focus on hybrid clouds. The framework is based mainly on a performance evaluation process that simulates various partitioning options to estimate and optimize the overall deployment costs. Further, to cope with the irregular workload changes within a MAS and dynamically adapt its initial deployment, we propose an extended version of the Fiduccia-Mattheyses algorithm (E-FM). The experimental results highlight the efficiency of E-FM and show that an efficient MAS deployment to hybrid clouds depends on various factors, such as the cloud providers and their different cost models, the network state, the partitioning algorithm used, and the initial deployment.
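
    As a minimal sketch of the Fiduccia-Mattheyses idea that E-FM extends, the code below performs one gain-driven pass over a two-way agent partition (on-premises vs. public cloud), moving and locking agents while a move still lowers an assumed cost that combines per-agent compute cost with billed cross-boundary traffic. The cost figures, traffic volumes and the simplified gain rule are assumptions; this is not the E-FM algorithm itself.

        # Illustrative single FM-style pass over a two-way agent partition.
        compute_cost = {"on_prem": 1.0, "public": 0.3}            # assumed per-agent cost
        traffic = {("a", "b"): 5, ("b", "c"): 2, ("c", "d"): 7}   # assumed message volumes

        def total_cost(side):
            cost = sum(compute_cost[side[agent]] for agent in side)
            # traffic is only billed when it crosses the partition boundary
            cost += 0.1 * sum(v for (a, b), v in traffic.items() if side[a] != side[b])
            return cost

        def flip(placement):
            return "public" if placement == "on_prem" else "on_prem"

        def fm_pass(side):
            """Greedily move the unlocked agent with the best cost gain; lock each
            agent after it has moved, as in a single Fiduccia-Mattheyses pass."""
            side = dict(side)
            locked = set()
            while len(locked) < len(side):
                best_agent, best_gain = None, 0.0
                for agent in side:
                    if agent in locked:
                        continue
                    candidate = dict(side)
                    candidate[agent] = flip(candidate[agent])
                    gain = total_cost(side) - total_cost(candidate)
                    if gain > best_gain:
                        best_agent, best_gain = agent, gain
                if best_agent is None:
                    break                    # no improving move left
                side[best_agent] = flip(side[best_agent])
                locked.add(best_agent)
            return side

        if __name__ == "__main__":
            initial = {agent: "on_prem" for agent in "abcd"}
            result = fm_pass(initial)
            print(result, round(total_cost(result), 2))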

    CDOXplorer: Simulation-based genetic optimization of software deployment and reconfiguration in the cloud

    Migrating existing enterprise software to cloud platforms involves the comparison of various cloud deployment options (CDOs). A CDO comprises a combination of a specific cloud environment, deployment architecture, and runtime reconfiguration rules for dynamic resource scaling. Our simulator CDOSim can evaluate CDOs, e.g., regarding response times and costs. However, the design space to be searched for well-suited solutions is very large. In this paper, we approach this optimization problem with the novel genetic algorithm CDOXplorer. It uses techniques from the field of search-based software engineering and simulations with CDOSim to assess the fitness of CDOs. An experimental evaluation that employs, among others, the cloud environments Amazon EC2 and Microsoft Windows Azure shows that CDOXplorer can find solutions that surpass those of other state-of-the-art techniques by up to 60%. Our experiment code and data, together with an implementation of CDOXplorer, are available as open-source software.
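
    The general pattern described above, a genetic algorithm whose fitness comes from simulating each cloud deployment option, can be sketched as follows. The simulate() stand-in, the option dimensions and the objective weighting are assumptions for illustration; in CDOXplorer that role is played by CDOSim and the real CDO model.

        import random

        rng = random.Random(42)
        VM_TYPES = ["small", "medium", "large"]   # assumed option dimensions
        SCALE_RULES = [1, 2, 4]

        def random_cdo():
            return {"vm": rng.choice(VM_TYPES), "max_scale": rng.choice(SCALE_RULES)}

        def simulate(cdo):
            """Stand-in for a simulator run: returns (cost, response_time)."""
            size = {"small": 1, "medium": 2, "large": 4}[cdo["vm"]]
            cost = size * cdo["max_scale"] * 1.0
            response_time = 100.0 / (size * cdo["max_scale"])
            return cost, response_time

        def fitness(cdo):
            cost, rt = simulate(cdo)
            return cost + 0.5 * rt        # assumed weighting of the two objectives

        def evolve(pop_size=10, generations=20):
            population = [random_cdo() for _ in range(pop_size)]
            for _ in range(generations):
                population.sort(key=fitness)
                parents = population[: pop_size // 2]
                children = []
                for _ in range(pop_size - len(parents)):
                    a, b = rng.sample(parents, 2)
                    child = {"vm": rng.choice([a["vm"], b["vm"]]),              # crossover
                             "max_scale": rng.choice([a["max_scale"], b["max_scale"]])}
                    if rng.random() < 0.2:                                      # mutation
                        child["vm"] = rng.choice(VM_TYPES)
                    children.append(child)
                population = parents + children
            return min(population, key=fitness)

        if __name__ == "__main__":
            best = evolve()
            print(best, round(fitness(best), 2))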

    CRAID: Online RAID upgrades using dynamic hot data reorganization

    Current algorithms used to upgrade RAID arrays typically require large amounts of data to be migrated, even those that move only the minimum amount of data required to keep a balanced data load. This paper presents CRAID, a self-optimizing RAID array that performs an online block reorganization of frequently used, long-term accessed data in order to reduce this migration even further. To achieve this objective, CRAID tracks frequently used, long-term data blocks and copies them to a dedicated partition spread across all the disks in the array. When new disks are added, CRAID only needs to extend this process to the new devices to redistribute this partition, thus greatly reducing the overhead of the upgrade process. In addition, the reorganized access patterns within this partition improve the array’s performance, amortizing the copy overhead and allowing CRAID to offer performance competitive with traditional RAID arrays. We describe CRAID’s motivation and design and evaluate it by replaying seven real-world workloads, including a file server, a web server and a user share. Our experiments show that CRAID can successfully detect hot data variations and begin using new disks as soon as they are added to the array. Also, the use of a dedicated partition improves the sequentiality of relevant data accesses, which amortizes the cost of reorganizations. Finally, we show that a full-HDD CRAID array with a small distributed partition (<1.28% per disk) can compete in performance with an ideally restriped RAID-5 and a hybrid RAID-5 with a small SSD cache.
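
    A minimal sketch of the hot-data mechanism described above: count block accesses, keep the hottest blocks copied into a small region striped over all current disks, and redistribute only that region when a disk is added. The capacity, placement rule and refresh policy here are assumptions, not CRAID's actual design.

        from collections import Counter

        class HotRegion:
            """Toy tracker for a dedicated hot-data partition spread over all disks."""

            def __init__(self, num_disks, capacity_blocks=8):
                self.num_disks = num_disks
                self.capacity = capacity_blocks
                self.counts = Counter()
                self.hot = {}                      # block -> disk holding its copy

            def access(self, block):
                self.counts[block] += 1
                self._refresh()

            def add_disk(self):
                """On upgrade, only the hot region is redistributed over the new disk."""
                self.num_disks += 1
                self._refresh(force=True)

            def _refresh(self, force=False):
                hottest = [b for b, _ in self.counts.most_common(self.capacity)]
                if force or set(hottest) != set(self.hot):
                    self.hot = {b: i % self.num_disks for i, b in enumerate(hottest)}

        if __name__ == "__main__":
            region = HotRegion(num_disks=4)
            for block in [1, 2, 1, 3, 1, 2, 4, 1, 2, 5]:
                region.access(block)
            region.add_disk()
            print(region.hot)                      # hot blocks spread over 5 disks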

    Explicit Building-Block Multiobjective Genetic Algorithms: Theory, Analysis, and Development

    This dissertation research emphasizes the performance of explicit Building Block (BB) based multiobjective evolutionary algorithms (MOEAs) and their detailed symbolic representation. An explicit BB-based MOEA for solving constrained and real-world multiobjective optimization problems (MOPs) is developed: the Multiobjective Messy Genetic Algorithm II (MOMGA-II), which is designed to validate symbolic BB concepts. The MOMGA-II demonstrates that explicit BB-based MOEAs provide insight into solving difficult MOPs that is generally not realized through implicit BB-based MOEA approaches. This insight is necessary to increase the effectiveness of all MOEA approaches. In order to increase MOEA computational efficiency, parallelization of MOEAs is addressed. Communication between processors in a parallel MOEA implementation is extremely important, hence innovative migration and replacement schemes for use in parallel MOEAs are detailed and tested. These parallel concepts support the development of the first explicit BB-based parallel MOEA, the pMOMGA-II. MOEA theory is also advanced through the derivation of the first MOEA population sizing theory. The multiobjective population sizing theory presented derives the MOEA population size necessary to achieve good results within a specified level of confidence. Just as in the single-objective case, the MOEA population sizing theory presents a very conservative sizing estimate. Validated results illustrate insight into building-block phenomena, good efficiency, excellent effectiveness, and motivation for future research in the area of explicit BB-based MOEAs. Thus the generic results of this research effort have applicability that aids in solving many different MOPs.
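
    Independent of the building-block machinery, the basic fitness question every MOEA answers is Pareto dominance. The sketch below shows a dominance test and nondominated filtering over minimised objective vectors; the example points are arbitrary assumed values, not results from the dissertation.

        def dominates(a, b):
            """True if solution a is at least as good as b in every (minimised)
            objective and strictly better in at least one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def nondominated(points):
            """Keep only the points no other point dominates (the Pareto front)."""
            return [p for p in points
                    if not any(dominates(q, p) for q in points if q is not p)]

        if __name__ == "__main__":
            front = nondominated([(1.0, 5.0), (2.0, 3.0), (3.0, 3.5), (4.0, 1.0)])
            print(front)   # (3.0, 3.5) is dominated by (2.0, 3.0) and is filtered out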

    Power-Aware Planning and Design for Next Generation Wireless Networks

    Mobile network operators have witnessed a transition from being voice dominated to video/data dominated, which has led to dramatic traffic growth over the past decade. With 4G wireless communication systems now deployed around the world, fifth generation (5G) mobile and wireless communication technologies are emerging as active research fields. The fast-growing data traffic volume and dramatic expansion of network infrastructures will inevitably trigger a tremendous escalation of energy consumption in wireless networks, which will result in increased greenhouse gas emissions and pose ever-increasing urgency on environmental protection and sustainable network development. Thus, energy efficiency is one of the most important principles that 5G network planning and design should follow.
    This dissertation presents power-aware planning and design for next generation wireless networks. We study network planning and design problems in both offline planning and online resource allocation. We propose approximation algorithms and effective heuristics for various network design scenarios, with different wireless network setups and different power-saving optimization objectives. We aim to save power consumption on both base stations (BSs) and user equipments (UEs) by leveraging wireless relay placement, small cell deployment, device-to-device communications and base station consolidation.
    We first study a joint signal-aware relay station placement and power allocation problem with consideration for multiple related physical constraints, such as channel capacity, the signal-to-noise ratio (SNR) requirement of subscribers, relay power and network topology, in multihop wireless relay networks. We present approximation schemes which first find a minimum number of relay stations, using maximum transmit power, to cover all the subscribers while meeting each SNR requirement, and then ensure communications between any subscriber and a base station by adjusting the transmit power of each relay station.
    In order to save power on BSs, we propose a practical solution and offer a new perspective on implementing green wireless networks by embracing small cell networks. Many existing works have proposed to schedule base stations into sleep to save energy. However, in reality, it is very difficult to shut down and reboot BSs frequently due to numerous technical issues and performance requirements. Instead of putting BSs into sleep, we tactically reduce the coverage of each base station and strategically place microcells to offload the traffic transmitted to/from BSs, saving total power consumption.
    In online resource allocation, we aim to save the transmit power of UEs by enabling device-to-device (D2D) communications in OFDMA-based wireless networks. Most existing works on D2D communications either targeted CDMA-based single-channel networks or aimed at maximizing network throughput. We formally define an optimization problem based on a practical link data rate model, whose objective is to minimize total power consumption while meeting user data rate requirements. We propose to solve it using a joint optimization approach, presenting two effective and efficient algorithms which both jointly determine mode selection, channel allocation and power assignment.
    In the last part of this dissertation, we propose to leverage load migration and base station consolidation for green communications and consider a power-efficient network planning problem in virtualized cognitive radio networks, with the objective of minimizing total power consumption while meeting the traffic load demand of each Mobile Virtual Network Operator (MVNO). First, we present a Mixed Integer Linear Program (MILP) to provide optimal solutions. Then we present a general optimization framework to guide algorithm design, which solves two subproblems, channel assignment and load allocation, in sequence. In addition, we present an effective heuristic algorithm that jointly solves the two subproblems. Numerical results are presented to confirm the theoretical analysis of our schemes and to show the strong performance of our solutions compared to several baseline methods.
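
    The first subproblem above, covering every subscriber with as few maximum-power relay stations as possible, has the flavour of a minimum set cover. The sketch below uses the classic greedy cover heuristic as an assumed stand-in; the coverage sets are invented, and the dissertation's actual approximation schemes and power-adjustment step are not reproduced.

        def greedy_relay_placement(candidate_covers, subscribers):
            """candidate_covers: site -> set of subscribers it covers at max power
            while meeting their (assumed) SNR requirement. Returns chosen sites."""
            uncovered = set(subscribers)
            chosen = []
            while uncovered:
                # pick the site that covers the most still-uncovered subscribers
                site = max(candidate_covers,
                           key=lambda s: len(candidate_covers[s] & uncovered))
                gained = candidate_covers[site] & uncovered
                if not gained:
                    raise ValueError("some subscribers cannot be covered")
                chosen.append(site)
                uncovered -= gained
            return chosen

        if __name__ == "__main__":
            covers = {"r1": {"u1", "u2"}, "r2": {"u2", "u3", "u4"}, "r3": {"u4", "u5"}}
            print(greedy_relay_placement(covers, ["u1", "u2", "u3", "u4", "u5"]))
            # -> ['r2', 'r1', 'r3']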

    MACHS: Mitigating the Achilles Heel of the Cloud through High Availability and Performance-aware Solutions

    Cloud computing is continuously growing as a business model for hosting information and communication technology applications. However, many concerns arise regarding the quality of service (QoS) offered by the cloud. One major challenge is the high availability (HA) of cloud-based applications. The key to achieving availability requirements is to develop an approach that is immune to cloud failures while minimizing service level agreement (SLA) violations. To this end, this thesis addresses the HA of cloud-based applications from different perspectives. First, the thesis proposes a component HA-aware scheduler (CHASE) to manage the deployments of carrier-grade cloud applications while maximizing their HA and satisfying the QoS requirements. Second, a Stochastic Petri Net (SPN) model is proposed to capture the stochastic characteristics of cloud services and quantify the expected availability offered by an application deployment. The SPN model is then associated with an extensible, policy-driven cloud scoring system that integrates other cloud challenges (i.e., green and cost concerns) with the HA objectives. The proposed HA-aware solutions are extended to include a live virtual machine migration model that provides a trade-off between migration time and downtime while maintaining the HA objective. Furthermore, the thesis proposes a generic input template for cloud simulators, GITS, to facilitate the creation of cloud scenarios while ensuring reusability, simplicity, and portability. Finally, an availability-aware CloudSim extension, ACE, is proposed. ACE extends the CloudSim simulator with failure injection, computational paths, repair, failover, load balancing, and other availability-based modules.
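
    In the simplest steady-state view, the availability quantities such a model estimates reduce to MTBF/MTTR arithmetic combined over serial and redundant components. The sketch below shows only that textbook arithmetic with assumed figures; the SPN model, CHASE and ACE themselves are not reproduced.

        def availability(mtbf_hours, mttr_hours):
            """Steady-state availability of one component from MTBF and MTTR."""
            return mtbf_hours / (mtbf_hours + mttr_hours)

        def serial(*avs):
            """All components required: availabilities multiply."""
            result = 1.0
            for a in avs:
                result *= a
            return result

        def parallel(*avs):
            """Redundant replicas: unavailable only if every replica is down."""
            unavail = 1.0
            for a in avs:
                unavail *= (1.0 - a)
            return 1.0 - unavail

        if __name__ == "__main__":
            vm = availability(mtbf_hours=2000, mttr_hours=1)   # assumed figures
            host = availability(mtbf_hours=5000, mttr_hours=4)
            single = serial(vm, host)                          # VM depends on its host
            redundant = parallel(single, single)               # two independent replicas
            print(round(single, 5), round(redundant, 7))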