
    Adaptive Resource Scheduling for Energy Efficient QRD Processor with DVFS

    This paper presents an energy-efficient adaptive QR decomposition (QRD) scheme for the Long Term Evolution Advanced (LTE-A) downlink. The proposed scheme provides performance robustness against fluctuating wireless channels while keeping the workload on reconfigurable hardware low. A statistics-based algorithm-switching strategy is employed to reduce the workload and stabilize the computing-resource requirement of QR decomposition. With run-time resource allocation, computing resources are assigned to the segments with the highest performance gain to reduce performance loss. By utilizing the dynamic voltage and frequency scaling (DVFS) technique, we further exploit the power-saving potential in various workload situations while maintaining a fixed throughput. When mapped onto a coarse-grained reconfigurable vector-based platform, the proposed technique reduces power by up to 57.8% in the EVA-5 scenario and by 24.4% (with a maximum SNR loss of 1 dB) in the EVA-70 scenario.
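    The abstract gives no implementation details, so the following is only a rough sketch of how a statistics-based algorithm switch combined with DVFS operating-point selection could be organized. The channel statistic, the algorithm names, the threshold, and the frequency/voltage table are all illustrative assumptions, not values from the paper.

        # Hypothetical sketch of statistics-based QRD algorithm switching with DVFS.
        # The channel statistic, algorithm names, and frequency/voltage table are
        # illustrative assumptions, not values from the paper.

        # Available operating points of the (assumed) reconfigurable platform:
        # (frequency in MHz, supply voltage in V)
        OPERATING_POINTS = [(100, 0.9), (200, 1.0), (400, 1.2)]

        def select_qrd_algorithm(channel_variation):
            """Pick a cheaper QRD variant when the channel is slowly fading."""
            # Assumed threshold separating low- and high-mobility scenarios.
            if channel_variation < 0.1:
                return "sorted_qrd_reduced"   # lower workload, small SNR loss
            return "sorted_qrd_full"          # full workload, best robustness

        def select_operating_point(workload_cycles, deadline_s):
            """Choose the lowest frequency (and voltage) that still meets the deadline."""
            for freq_mhz, vdd in OPERATING_POINTS:
                if workload_cycles / (freq_mhz * 1e6) <= deadline_s:
                    return freq_mhz, vdd
            return OPERATING_POINTS[-1]  # fall back to the fastest point

        # Example: low channel variation -> reduced algorithm, lowest feasible frequency.
        algo = select_qrd_algorithm(channel_variation=0.05)
        freq, vdd = select_operating_point(workload_cycles=15e6, deadline_s=0.1)
        print(algo, freq, vdd)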

    A Survey of Techniques For Improving Energy Efficiency in Embedded Computing Systems

    Recent technological advances have greatly improved the performance and features of embedded systems. With the number of mobile devices alone now approaching the population of the Earth, embedded systems have truly become ubiquitous. These trends, however, have also made the task of managing their power consumption extremely challenging. In recent years, several techniques have been proposed to address this issue. In this paper, we survey techniques for managing the power consumption of embedded systems. We discuss the need for power management and classify the techniques along several important parameters to highlight their similarities and differences. This paper is intended to help researchers and application developers gain insight into the working of power management techniques and design even more efficient high-performance embedded systems of tomorrow.

    A DVS system based on the trade-off between energy savings and execution time

    DVS (Dynamic Voltage Scaling) is a technique used for reducing the power consumption of digital circuits. The power consumed by these circuits has a main component (dynamic power) that is proportional to the square of the supply voltage. Additionally, each supply voltage has a corresponding maximum clock frequency. The advantage of using DVS is that the supply voltage (and hence the clock frequency) can be adjusted to the specific needs during execution. The DVS concept has been used in commercial products such as Transmeta’s Crusoe [1], Intel SpeedStep [2], AMD K6 [3], Hitachi SH4 [4], etc. This paper presents results obtained with a DVS algorithm based on workload estimation and the trade-off between execution time and power savings. We discuss the influence of the power supply’s slew rate, the algorithm’s influence on system performance, and the difficulty of estimating the processor’s workload. The DVS system is implemented on Intel’s PXA255 platform, and the energy savings have been calculated by directly measuring voltages and currents on the platform.
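    As a loose illustration of a workload-estimation DVS policy of this kind, the sketch below predicts the next interval's workload from recent history and selects the lowest operating point that still meets the timing constraint. The operating points, slack margin, and estimator are assumptions for illustration, not the algorithm or the PXA255 settings used in the paper.

        # Illustrative workload-estimation DVS policy (not the paper's algorithm).
        # Operating points (frequency MHz, voltage V) and the slack margin are assumed.

        OPERATING_POINTS = [(100, 0.85), (200, 1.0), (300, 1.1), (400, 1.3)]

        def estimate_workload(history_cycles, alpha=0.5):
            """Exponentially weighted moving average of past per-interval cycle counts."""
            estimate = history_cycles[0]
            for cycles in history_cycles[1:]:
                estimate = alpha * cycles + (1 - alpha) * estimate
            return estimate

        def pick_operating_point(estimated_cycles, interval_s, margin=1.1):
            """Lowest frequency whose execution time (with margin) fits the interval."""
            for freq_mhz, vdd in OPERATING_POINTS:
                exec_time = margin * estimated_cycles / (freq_mhz * 1e6)
                if exec_time <= interval_s:
                    return freq_mhz, vdd
            return OPERATING_POINTS[-1]

        history = [8e6, 9e6, 7e6, 8.5e6]      # cycles used in recent intervals
        cycles = estimate_workload(history)
        freq, vdd = pick_operating_point(cycles, interval_s=0.05)
        # Dynamic power scales roughly with C * V^2 * f, so running at (freq, vdd)
        # instead of the maximum point saves energy at the cost of a longer runtime.
        print(freq, vdd)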

    Trade-off between Energy Savings and Execution Time Applying DVS to a Microprocessor

    DVS (Dynamic Voltage Scaling) is a technique used for reducing the power consumption of microprocessors. The power consumed by these circuits has a main component (dynamic power) that is proportional to the square of the supply voltage. Additionally, each supply voltage has a corresponding maximum clock frequency. The advantage of using DVS is that the supply voltage (and hence the clock frequency) can be adjusted to the specific needs during execution. The DVS concept has been used in commercial products such as Transmeta’s Crusoe [1], Intel SpeedStep [2], AMD K6 [3], Hitachi SH4 [4], etc. The DVS algorithm proposed in this work is based on the trade-off between the application’s execution time and the energy consumed by the microprocessor. By controlling the execution time, the consumed energy is controlled indirectly: a longer execution time means less energy demanded by the CPU. The algorithm has been implemented on a platform with an Intel XScale PXA255 microprocessor, and the energy savings have been calculated by directly measuring currents and voltages on the platform. Using this technique it is possible to achieve up to 50% power savings at the cost of a 50% longer execution time.
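    The reported trade-off (roughly 50% energy savings for a 50% longer execution time) is consistent with dynamic energy tracking the square of the supply voltage; a back-of-the-envelope check with purely illustrative voltage values is shown below.

        # Back-of-the-envelope check of the energy/execution-time trade-off.
        # Dynamic energy per task is roughly E = C * V^2 * cycles; the cycle count
        # is fixed, so energy tracks V^2. Voltages are illustrative, not PXA255 data.

        def relative_energy(v_low, v_high):
            """Energy at the lower operating point relative to the higher one."""
            return (v_low / v_high) ** 2

        # Running ~1.5x slower (e.g. at 2/3 of the clock) lets the supply voltage drop.
        v_high, v_low = 1.3, 0.92          # assumed voltages for the two points
        print(f"relative energy: {relative_energy(v_low, v_high):.2f}")   # ~0.50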

    Iso-energy-efficiency: An approach to power-constrained parallel computation

    Future large-scale high-performance supercomputer systems require high energy efficiency to achieve exaflops computational power and beyond. Despite the need to understand energy efficiency in high-performance systems, there are few techniques to evaluate energy efficiency at scale. In this paper, we propose a system-level iso-energy-efficiency model to analyze, evaluate, and predict the energy-performance of data-intensive parallel applications with various execution patterns running on large-scale power-aware clusters. Our analytical model can help users explore the effects of machine- and application-dependent characteristics on system energy efficiency and isolate efficient ways to scale system parameters (e.g., processor count, CPU power/frequency, workload size, and network bandwidth) to balance energy use and performance. We derive our iso-energy-efficiency model and apply it to the NAS Parallel Benchmarks on two power-aware clusters. The results indicate that the model accurately predicts total system energy consumption, within 5% error on average, for parallel applications with various execution and communication patterns. We demonstrate effective use of the model in various application contexts and in scalability decision-making.
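    As a loose illustration of the kind of system-level accounting such a model performs, the sketch below sums per-node compute and communication energy and compares two scaling choices. The power draws and time breakdowns are assumed values, not parameters of the paper's iso-energy-efficiency model.

        # Loose illustration of system-level energy accounting for a parallel run.
        # Power draws and time breakdowns are assumed values, not the paper's model.

        def system_energy(nodes, t_compute, t_comm, p_compute=95.0, p_comm=60.0):
            """Total energy (J): each node draws p_compute while computing and
            p_comm while communicating or waiting."""
            return nodes * (p_compute * t_compute + p_comm * t_comm)

        # Scaling up the node count shrinks compute time but grows communication
        # time, so total energy does not necessarily fall with more nodes.
        e_small = system_energy(nodes=16, t_compute=120.0, t_comm=15.0)
        e_large = system_energy(nodes=64, t_compute=35.0, t_comm=30.0)
        print(e_small, e_large)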

    Developing an energy efficient real-time system

    The increasing number of battery-operated devices creates a need for an energy-efficient real-time operating system for such devices. Designing a truly energy-efficient system is a multi-stage effort; this thesis consists of three main tasks that address different aspects of the energy efficiency of a real-time system (RTS). The first chapter introduces an energy-efficient algorithm that adjusts the processor frequency using DVFS to schedule tasks on cores. A speed profile is calculated for every task, giving information about how long the task will run and at what processor speed. We pair tasks with similar speed profiles to obtain a merged speed profile that can be efficiently scheduled on a cluster. Experiments carried out on an ODROID-XU3 are compared with a reference approach and show energy savings of up to 20%. The second chapter proposes power-aware techniques to partition a task set over a heterogeneous platform such that the overall energy consumption is minimized. With the help of the calculated speed profiles, the second contribution of this work feasibly partitions a given task set into individual sets for a cluster-based homogeneous platform. Various heuristics are proposed and compared against a baseline approach using simulation results. The final chapter of this thesis focuses on the importance of having an underlying energy-efficient operating system. We discuss an energy-efficient way of porting a real-time operating system (RTOS), QP, onto the TMS320F28377S, along with modifications to make the Operating System (OS) consume minimal energy for its operation --Abstract, page iii
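    The abstract describes the speed-profile pairing idea only at a high level; the sketch below shows, with hypothetical task parameters, one way such profiles could be represented, merged, and paired before assigning a pair to a DVFS cluster. The profile representation, merge rule, and pairing cost are assumptions, not the thesis's algorithm.

        # Hypothetical sketch of speed-profile pairing for cluster-level DVFS.
        # Task parameters, the profile representation, and the merge rule are assumed.

        from itertools import combinations

        # A speed profile: required normalized speed over consecutive time segments.
        profiles = {
            "task_a": [0.8, 0.4, 0.4],
            "task_b": [0.7, 0.5, 0.3],
            "task_c": [0.2, 0.9, 0.6],
        }

        def merged_profile(p1, p2):
            """The cluster must run at the higher of the two required speeds per segment."""
            return [max(a, b) for a, b in zip(p1, p2)]

        def pairing_cost(p1, p2):
            """Average merged speed: lower means less wasted capacity when paired."""
            merged = merged_profile(p1, p2)
            return sum(merged) / len(merged)

        # Greedy choice: pair the two tasks whose merged profile is cheapest.
        best_pair = min(combinations(profiles, 2),
                        key=lambda pair: pairing_cost(profiles[pair[0]], profiles[pair[1]]))
        print(best_pair, merged_profile(*(profiles[t] for t in best_pair)))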