    Experiments on Model-Based Software Energy Consumption Analysis Involving Sorting Algorithms

    Although energy has become an important aspect of software development, little support exists for creating energy-efficient programs. One reason is the lack of abstractions and tools for analysing relevant properties involving energy consumption. This paper presents the results of experiments involving the gathering, modelling, and analysis of energy-related information, in particular the cost of executing certain parts of a program. We combine existing free and open-source tools to carry out the experiments, extending one of them to handle energy information. Our experiments compare the energy consumption of Java implementations of the Bubble Sort, Insertion Sort, and Selection Sort algorithms using different data structures. We show how to combine an energy measurement tool and a model analysis tool to carry out such a comparison. Based on this support and on our experiments, we believe this is a first step towards enabling developers to create more energy-efficient software.
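
    The paper's own measurement and model-analysis tools are not named in this abstract, so the following is only a minimal sketch of the kind of instrumentation involved: a Java Bubble Sort wrapped in a hypothetical probe, with wall-clock time standing in for an energy reading.

```java
import java.util.Random;

// Minimal sketch (not the paper's tooling): instrument a Java Bubble Sort run
// with a stand-in "energy" probe. Wall-clock time is used here as a proxy;
// the paper combines an actual energy measurement tool with a model analyser.
public class BubbleSortEnergySketch {

    // Classic Bubble Sort on an int array (in place).
    static void bubbleSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            for (int j = 0; j < a.length - 1 - i; j++) {
                if (a[j] > a[j + 1]) {
                    int tmp = a[j];
                    a[j] = a[j + 1];
                    a[j + 1] = tmp;
                }
            }
        }
    }

    public static void main(String[] args) {
        int n = 10_000;
        int[] data = new Random(42).ints(n).toArray();

        long start = System.nanoTime();      // hypothetical probe start
        bubbleSort(data);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // In the paper's setting, this reading would come from an energy
        // measurement tool rather than a timer.
        System.out.println("Bubble Sort on " + n + " elements took " + elapsedMs + " ms");
    }
}
```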

    An Algorithm for Network and Data-aware Placement of Multi-Tier Applications in Cloud Data Centers

    Today's Cloud applications are dominated by composite applications comprising multiple computing and data components with strong communication correlations among them. Although Cloud providers are deploying large numbers of computing and storage devices to address the ever-increasing demand for computing and storage resources, network resource demand is emerging as a key performance bottleneck. This paper addresses network-aware placement of the virtual components (computing and data) of multi-tier applications in data centers and formally defines the placement as an optimization problem. The simultaneous placement of Virtual Machines and data blocks aims at reducing the network overhead of the data center network infrastructure. A greedy heuristic is proposed for on-demand placement of application components that localizes network traffic in the data center interconnect. Such optimization helps reduce communication overhead in upper-layer network switches, which eventually reduces the overall traffic volume across the data center. This, in turn, helps reduce packet transmission delay, increase network performance, and minimize the energy consumption of network components. Experimental results demonstrate the performance superiority of the proposed algorithm: it outperforms the state-of-the-art network-aware application placement algorithm across all performance metrics, reducing the average network cost by up to 67% and network usage at core switches by up to 84%, while increasing the average number of application deployments by up to 18%.
    Comment: Submitted for publication consideration to the Journal of Network and Computer Applications (JNCA). 28 pages, 15 figures.
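
    The abstract does not reproduce the paper's cost model or exact heuristic, so the sketch below is a simplified, hypothetical greedy placement in Java: each VM is assigned to the feasible host closest (by a crude rack-distance metric) to the host holding its data block. All class and field names are illustrative assumptions.

```java
import java.util.*;

// Simplified sketch of a greedy, network-aware placement heuristic
// (hypothetical names; not the paper's exact algorithm or cost model):
// place each VM on the feasible host that minimises the communication
// cost to the data block it talks to, approximated by a host "distance".
public class GreedyPlacementSketch {

    record Host(String id, int freeCpu, int rackId) {}
    record Vm(String id, int cpuDemand, String dataHost) {} // dataHost: where its data block lives

    // Crude network distance: same host = 0, same rack = 1, across racks = 3.
    static int distance(Host a, String otherHostId, Map<String, Host> hosts) {
        if (a.id().equals(otherHostId)) return 0;
        Host b = hosts.get(otherHostId);
        return a.rackId() == b.rackId() ? 1 : 3;
    }

    static Map<String, String> place(List<Vm> vms, Map<String, Host> hosts) {
        Map<String, Integer> used = new HashMap<>();
        Map<String, String> placement = new HashMap<>();
        for (Vm vm : vms) {
            Host best = null;
            int bestCost = Integer.MAX_VALUE;
            for (Host h : hosts.values()) {
                int load = used.getOrDefault(h.id(), 0);
                if (load + vm.cpuDemand() > h.freeCpu()) continue; // capacity check
                int cost = distance(h, vm.dataHost(), hosts);
                if (cost < bestCost) { bestCost = cost; best = h; }
            }
            if (best == null) throw new IllegalStateException("no feasible host for " + vm.id());
            used.merge(best.id(), vm.cpuDemand(), Integer::sum);
            placement.put(vm.id(), best.id());
        }
        return placement;
    }

    public static void main(String[] args) {
        Map<String, Host> hosts = Map.of(
            "h1", new Host("h1", 8, 1),
            "h2", new Host("h2", 8, 2));
        List<Vm> vms = List.of(new Vm("web", 4, "h1"), new Vm("db", 4, "h1"));
        System.out.println(place(vms, hosts)); // both VMs end up co-located with their data on h1
    }
}
```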

    A Novel Multiobjective Cell Switch-Off Framework for Cellular Networks

    Cell Switch-Off (CSO) is recognized as a promising approach to reduce the energy consumption of next-generation cellular networks. However, CSO poses serious challenges not only from the resource allocation perspective but also from the implementation point of view. Indeed, CSO represents a difficult optimization problem due to its NP-complete nature. Moreover, there are a number of important practical limitations in the implementation of CSO schemes, such as the need to minimize the real-time complexity, the number of on-off/off-on transitions, and CSO-induced handovers. This article introduces a novel approach to CSO based on multiobjective optimization that makes use of the statistical description of the service demand (which is known to operators). In addition, downlink and uplink coverage criteria are included, and a comparative analysis between different models for characterizing intercell interference is presented to shed light on their impact on CSO. The framework distinguishes itself from other proposals in two ways: 1) the number of on-off/off-on transitions as well as handovers is minimized, and 2) the computationally heavy part of the algorithm is executed offline, which makes its implementation feasible. The results show that the proposed scheme achieves substantial energy savings in small-cell deployments where service demand is not uniformly distributed, without compromising the Quality of Service (QoS) or requiring heavy real-time processing.
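
    As a rough illustration of the offline, multiobjective flavour of such a framework (not the paper's actual model), the toy Java sketch below enumerates on/off patterns for a handful of cells and keeps the Pareto front over two of the objectives mentioned: the number of active cells (an energy proxy) and the number of on-off/off-on transitions relative to the current configuration. The coverage constraint is a placeholder assumption.

```java
import java.util.*;

// Toy sketch of the offline part of a multiobjective cell switch-off search
// (hypothetical model, not the paper's framework): enumerate on/off patterns
// for a few cells and keep the non-dominated set over two objectives,
// (1) number of active cells and (2) transitions from the current pattern.
public class CsoParetoSketch {

    static int activeCells(boolean[] pattern) {
        int n = 0;
        for (boolean on : pattern) if (on) n++;
        return n;
    }

    static int transitions(boolean[] pattern, boolean[] current) {
        int t = 0;
        for (int i = 0; i < pattern.length; i++) if (pattern[i] != current[i]) t++;
        return t;
    }

    // Assumed (placeholder) coverage constraint: at least `minOn` cells stay on.
    static boolean feasible(boolean[] pattern, int minOn) {
        return activeCells(pattern) >= minOn;
    }

    public static void main(String[] args) {
        int cells = 4;
        boolean[] current = {true, true, true, true};
        List<int[]> front = new ArrayList<>(); // each entry: {patternBits, active, transitions}

        for (int bits = 0; bits < (1 << cells); bits++) {
            boolean[] p = new boolean[cells];
            for (int i = 0; i < cells; i++) p[i] = ((bits >> i) & 1) == 1;
            if (!feasible(p, 2)) continue;
            int[] cand = {bits, activeCells(p), transitions(p, current)};
            // Keep only non-dominated candidates (both objectives minimized).
            boolean dominated = front.stream().anyMatch(f -> f[1] <= cand[1] && f[2] <= cand[2]
                    && (f[1] < cand[1] || f[2] < cand[2]));
            if (dominated) continue;
            front.removeIf(f -> cand[1] <= f[1] && cand[2] <= f[2]
                    && (cand[1] < f[1] || cand[2] < f[2]));
            front.add(cand);
        }
        for (int[] f : front)
            System.out.printf("pattern=%4s active=%d transitions=%d%n",
                    Integer.toBinaryString(f[0]), f[1], f[2]);
    }
}
```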

    Virtualized Baseband Units Consolidation in Advanced LTE Networks Using Mobility- and Power-Aware Algorithms

    Virtualization of baseband units in Advanced Long-Term Evolution networks and the rapid performance growth of general-purpose processors naturally raise interest in resource multiplexing. The concept of resource sharing and management between virtualized instances is not new and is extensively used in data centers. We adopt some of these resource management techniques to organize virtualized baseband units on a pool of hosts and investigate the behavior of the system in order to identify features that are particularly relevant to the mobile environment. Subsequently, we introduce our own resource management algorithm, specifically targeted at some of the peculiarities identified by the experimental results.
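
    The algorithm itself is not detailed in this abstract; as a hedged illustration of the consolidation idea only, the Java sketch below packs virtual BBU loads onto as few hosts as possible with first-fit-decreasing bin packing, using the number of powered-on hosts as a crude power proxy. The names and the capacity model are assumptions, and mobility-driven load changes would simply re-trigger this step in a real system.

```java
import java.util.*;

// Simplified sketch, not the work's actual algorithm: consolidate virtual BBU
// loads onto as few hosts as possible with first-fit-decreasing bin packing;
// fewer active hosts is used here as a crude power proxy.
public class BbuConsolidationSketch {

    static int consolidate(double[] bbuLoads, double hostCapacity) {
        double[] loads = bbuLoads.clone();
        Arrays.sort(loads);                              // ascending...
        List<Double> hosts = new ArrayList<>();          // remaining capacity per active host
        for (int i = loads.length - 1; i >= 0; i--) {    // ...iterate descending (largest first)
            double load = loads[i];
            boolean placed = false;
            for (int h = 0; h < hosts.size(); h++) {
                if (hosts.get(h) >= load) {              // first host that still fits
                    hosts.set(h, hosts.get(h) - load);
                    placed = true;
                    break;
                }
            }
            if (!placed) hosts.add(hostCapacity - load); // power on a new host
        }
        return hosts.size();
    }

    public static void main(String[] args) {
        // BBU loads as a fraction of one host's processing capacity (illustrative values).
        double[] loads = {0.6, 0.5, 0.4, 0.3, 0.2};
        System.out.println("Active hosts needed: " + consolidate(loads, 1.0));
    }
}
```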

    Large-Scale Data Management and Analysis (LSDMA) - Big Data in Science

    Algorithmic patterns for $\mathcal{H}$-matrices on many-core processors

    In this work, we consider the reformulation of hierarchical ($\mathcal{H}$) matrix algorithms for many-core processors, with a model implementation on graphics processing units (GPUs). $\mathcal{H}$ matrices approximate specific dense matrices, e.g., from discretized integral equations or kernel ridge regression, leading to log-linear time complexity in dense matrix-vector products. The parallelization of $\mathcal{H}$ matrix operations on many-core processors is difficult due to the complex nature of the underlying algorithms. While previous algorithmic advances for many-core hardware focused on accelerating existing $\mathcal{H}$ matrix CPU implementations with many-core processors, we here aim at relying entirely on that processor type. As our main contribution, we introduce the parallel algorithmic patterns needed to map the full $\mathcal{H}$ matrix construction and the fast matrix-vector product to many-core hardware. The crucial ingredients are space-filling curves, parallel tree traversal, and batching of linear algebra operations. The resulting model GPU implementation, hmglib, is, to the best of the authors' knowledge, the first entirely GPU-based open-source $\mathcal{H}$ matrix library of this kind. We conclude this work with an in-depth performance analysis and a comparative performance study against a standard $\mathcal{H}$ matrix library, highlighting profound speedups of our many-core parallel approach.
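
    Of the ingredients listed (space-filling curves, parallel tree traversal, batched linear algebra), the first is easy to illustrate in isolation. The Java sketch below orders 2-D points by Morton (Z-order) codes, one common realisation of a space-filling curve used before building cluster trees; it makes no claim about hmglib's actual implementation.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch of one ingredient named in the abstract: a space-filling-curve
// ordering of 2-D points via Morton (Z-order) codes. Ordering points this
// way keeps spatially close points close in memory, a common prerequisite
// for building cluster trees in H-matrix codes. Generic illustration only.
public class MortonOrderSketch {

    // Interleave the low 16 bits of x and y into a 32-bit Morton code.
    static long morton2d(int x, int y) {
        return (spreadBits(x) << 1) | spreadBits(y);
    }

    // Insert a zero bit between each of the low 16 bits of v.
    static long spreadBits(int v) {
        long x = v & 0xFFFFL;
        x = (x | (x << 8)) & 0x00FF00FFL;
        x = (x | (x << 4)) & 0x0F0F0F0FL;
        x = (x | (x << 2)) & 0x33333333L;
        x = (x | (x << 1)) & 0x55555555L;
        return x;
    }

    public static void main(String[] args) {
        int[][] points = {{3, 1}, {0, 0}, {2, 2}, {1, 3}};
        Integer[] order = {0, 1, 2, 3};
        Arrays.sort(order, Comparator.comparingLong(
                (Integer i) -> morton2d(points[i][0], points[i][1])));
        System.out.println("Z-order of point indices: " + Arrays.toString(order));
    }
}
```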

    SlowFuzz: Automated Domain-Independent Detection of Algorithmic Complexity Vulnerabilities

    Algorithmic complexity vulnerabilities occur when the worst-case time/space complexity of an application is significantly higher than the respective average case for particular user-controlled inputs. When such conditions are met, an attacker can launch Denial-of-Service attacks against a vulnerable application by providing inputs that trigger the worst-case behavior. Such attacks have been known to seriously affect production systems, take down entire websites, or lead to bypasses of Web Application Firewalls. Unfortunately, existing detection mechanisms for algorithmic complexity vulnerabilities are domain-specific and often require significant manual effort. In this paper, we design, implement, and evaluate SlowFuzz, a domain-independent framework for automatically finding algorithmic complexity vulnerabilities. SlowFuzz automatically finds inputs that trigger worst-case algorithmic behavior in the tested binary. SlowFuzz uses resource-usage-guided evolutionary search techniques to automatically find inputs that maximize computational resource utilization for a given application.
    Comment: ACM CCS '17, October 30 - November 3, 2017, Dallas, TX, USA.
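
    As a toy illustration of resource-usage-guided evolutionary search in the spirit of SlowFuzz (not its implementation), the Java sketch below mutates inputs to an instrumented insertion sort and keeps any mutant that increases the comparison count; because insertion sort's worst case grows quadratically, the search gravitates toward nearly reverse-sorted inputs.

```java
import java.util.Random;

// Toy illustration of resource-usage-guided evolutionary search (hypothetical,
// not SlowFuzz's code): mutate inputs and keep the mutant whenever it drives up
// a resource counter of the target. The target is an instrumented insertion
// sort whose comparison count serves as the "resource usage".
public class SlowPathSearchSketch {

    // Target under test: insertion sort that reports how many comparisons it made.
    static long instrumentedInsertionSort(int[] input) {
        int[] a = input.clone();
        long comparisons = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i], j = i - 1;
            while (j >= 0) {
                comparisons++;
                if (a[j] <= key) break;
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
        return comparisons;
    }

    public static void main(String[] args) {
        Random rng = new Random(1);
        int n = 64;
        int[] best = rng.ints(n, 0, 1000).toArray();
        long bestScore = instrumentedInsertionSort(best);

        for (int gen = 0; gen < 20_000; gen++) {
            int[] mutant = best.clone();
            // Simple mutation operators: overwrite a random position or swap two.
            if (rng.nextBoolean()) {
                mutant[rng.nextInt(n)] = rng.nextInt(1000);
            } else {
                int i = rng.nextInt(n), j = rng.nextInt(n);
                int tmp = mutant[i]; mutant[i] = mutant[j]; mutant[j] = tmp;
            }
            long score = instrumentedInsertionSort(mutant);
            if (score > bestScore) {          // keep mutants that burn more resources
                bestScore = score;
                best = mutant;
            }
        }
        System.out.println("Comparisons triggered: " + bestScore
                + " (worst case for n=" + n + " is " + ((long) n * (n - 1) / 2) + ")");
    }
}
```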