5,396 research outputs found

    A Multi-objective Perspective for Operator Scheduling using Fine-grained DVS Architecture

    Full text link
    The stringent power budget of fine-grained, power-managed digital integrated circuits has driven chip designers to optimize power at the cost of area and delay, the traditional cost criteria for circuit optimization. This emerging scenario motivates us to revisit the classical operator scheduling problem given the availability of DVFS-enabled functional units that can trade cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B&B) algorithm that explores this state space and reports the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and attains substantial area and power gains on complex benchmarks when the timing constraint is sufficiently relaxed. Experimental results show that the algorithm, operating without any user constraint (area/power), solves the problem for most available benchmarks, and that imposing a power or area budget leads to significant performance gains.
    Comment: 18 pages, 6 figures, International Journal of VLSI Design & Communication Systems (VLSICS)
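    As a rough illustration of the kind of exploration described above (not the paper's actual algorithm), the sketch below enumerates per-operator DVFS modes with branch-and-bound pruning against a latency bound and keeps the Pareto front of (area, power). All operator names, costs, and the latency bound are invented.

```python
# A minimal branch-and-bound sketch over per-operator DVFS modes.
# Hypothetical data: each mode is (power, area, cycles).
OP_MODES = {
    "mul1": [(9.0, 4.0, 1), (5.0, 4.5, 2)],
    "mul2": [(9.0, 4.0, 1), (5.0, 4.5, 2)],
    "add1": [(3.0, 1.0, 1), (1.5, 1.2, 2)],
}
LATENCY_BOUND = 5  # cycles, assuming a purely sequential schedule


def dominates(a, b):
    """True if point a = (area, power) is no worse than b on both axes and differs."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b


def pareto_front():
    ops = list(OP_MODES)
    front = []  # non-dominated (area, power) points found so far

    def lower_bound(i, power, area):
        # Optimistic completion: cheapest remaining mode on each axis independently.
        lb_p = power + sum(min(m[0] for m in OP_MODES[o]) for o in ops[i:])
        lb_a = area + sum(min(m[1] for m in OP_MODES[o]) for o in ops[i:])
        return lb_a, lb_p

    def recurse(i, cycles, power, area):
        if cycles > LATENCY_BOUND:
            return  # prune: latency bound already violated
        if any(dominates(p, lower_bound(i, power, area)) for p in front):
            return  # prune: even the optimistic completion is dominated
        if i == len(ops):
            pt = (area, power)
            if pt not in front:
                front[:] = [p for p in front if not dominates(pt, p)] + [pt]
            return
        for p, a, c in OP_MODES[ops[i]]:
            recurse(i + 1, cycles + c, power + p, area + a)

    recurse(0, 0, 0.0, 0.0)
    return sorted(front)


if __name__ == "__main__":
    for area, power in pareto_front():
        print(f"area = {area:.1f}  power = {power:.1f}")
```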

    Timed Consistent Network Updates

    Full text link
    Network updates such as policy and routing changes occur frequently in Software Defined Networks (SDN). Updates should be performed consistently, preventing temporary disruptions, and should require as little overhead as possible. Scalability is increasingly becoming an essential requirement in SDN. In this paper we propose time-triggered network updates as a way to achieve consistency. Our solution requires lower overhead than existing update approaches, without compromising consistency during the update. We demonstrate that accurate time enables far more scalable consistent updates in SDN than previously available. In addition, it gives the SDN programmer fine-grained control over the trade-off between consistency and scalability.
    Comment: This technical report is an extended version of the paper "Timed Consistent Network Updates", which was accepted to the ACM SIGCOMM Symposium on SDN Research (SOSR) '15, Santa Clara, CA, US, June 2015
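    The core idea lends itself to a very small sketch. The following is illustrative only and is not the protocol from the paper: a controller pre-installs the new rules at every switch together with a common activation time T, so the whole network flips to the new policy at (approximately) the same instant rather than via a multi-phase, tag-based update. Switch and rule names are invented.

```python
# Illustrative time-triggered update: every switch arms a timer that swaps in
# the new rule set at the same absolute time T.
import threading
import time


class Switch:
    def __init__(self, name):
        self.name = name
        self.active_rules = {"version": 1}  # currently installed policy

    def schedule_update(self, new_rules, activate_at):
        """Arm a timer that installs new_rules at absolute time activate_at."""
        delay = max(0.0, activate_at - time.time())
        timer = threading.Timer(delay, self._activate, args=(new_rules,))
        timer.start()
        return timer

    def _activate(self, new_rules):
        self.active_rules = new_rules
        print(f"{time.time():.3f} {self.name}: now running version {new_rules['version']}")


if __name__ == "__main__":
    switches = [Switch(f"s{i}") for i in range(3)]
    T = time.time() + 1.0  # common activation time, one second from now
    timers = [sw.schedule_update({"version": 2}, T) for sw in switches]
    for t in timers:
        t.join()  # wait until all scheduled activations have fired
```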

    Energy-Centric Scheduling for Real-Time Systems

    Get PDF
    Energy consumption is today an important design issue for all kinds of digital systems, and essential for battery-operated ones. An important fraction of this energy is dissipated by the processors running the application software. To reduce this energy consumption, one may, for instance, lower the processor clock frequency and supply voltage. This, however, might degrade the performance of the whole system. In real-time systems the crucial issue is timing, which depends directly on the system speed. Real-time scheduling and energy efficiency are therefore tightly connected issues, and they are addressed together in this work. Several scheduling approaches for low energy are described in the thesis, most targeting variable-speed processor architectures. At task level, a novel speed-scheduling algorithm for tasks with a probabilistic execution pattern is introduced and compared to an existing compile-time approach. For task graphs, a list-scheduling-based algorithm with an energy-sensitive priority is proposed. For task sets, off-line methods for computing the tasks' maximum required speeds are described, both for rate-monotonic and earliest-deadline-first scheduling. A run-time speed optimization policy based on slack redistribution is also proposed for rate-monotonic scheduling. Next, an energy-efficient extension of the earliest-deadline-first priority assignment policy is proposed, aimed at tasks with probabilistic execution times. Finally, scheduling is examined in conjunction with the assignment of tasks to processors, as part of various low-energy design flows. For some of the algorithms given in the thesis, energy measurements were carried out on a real hardware platform containing a variable-speed processor. The results confirm the validity of the initial assumptions and models used throughout the thesis. These experiments also show the efficiency of the newly introduced scheduling methods.
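    As a concrete taste of the off-line speed computations mentioned above, the sketch below covers only the EDF case with implicit deadlines, where a well-known result says the lowest constant speed that keeps the task set schedulable equals its total utilization (and on a processor with discrete DVFS levels, the smallest level at or above that utilization suffices). The task parameters and speed levels are invented.

```python
# Minimum constant speed for EDF with implicit deadlines: speed = total utilization.
from dataclasses import dataclass


@dataclass
class Task:
    wcet: float    # worst-case execution time at maximum speed
    period: float  # period == relative deadline (implicit deadlines)


def edf_min_speed(tasks, speed_levels=None):
    """Minimum normalized speed (1.0 = full speed) keeping EDF feasible."""
    u = sum(t.wcet / t.period for t in tasks)
    if u > 1.0:
        raise ValueError("task set infeasible even at full speed")
    if speed_levels is None:
        return u  # ideal continuous-speed processor
    # Discrete DVFS levels: pick the smallest level that is still >= u.
    return min(s for s in sorted(speed_levels) if s >= u)


if __name__ == "__main__":
    tasks = [Task(1.0, 4.0), Task(2.0, 10.0), Task(1.0, 8.0)]  # U = 0.575
    print(edf_min_speed(tasks))                                # 0.575
    print(edf_min_speed(tasks, [0.25, 0.5, 0.75, 1.0]))        # 0.75
```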

    Optimal Selection of Supply Voltages and Level Conversions During Low Power Data Path Scheduling

    Get PDF
    In this paper we consider how to select an optimal set of supply voltages and account for level-conversion costs when optimizing the schedule of a resource-dominated data path for minimum average power dissipation. Integer linear program (ILP) and non-linear program (NLP) formulations are presented for a minimum-power schedule under latency and throughput constraints. Results are presented for several data path topologies under minimum-latency constraints and under more relaxed latency constraints. The optimization demonstrated substantial benefit when going from one to two supply voltages, but minimal additional benefit from any further supplies. For example, a Kalman filter benchmark produced a power estimate of 356.7mW for a single 5V supply, 265.4mW for 4V and 5V supplies, but no additional improvement for three supplies. Increasing the minimum schedule latency by 50% improved the optimization results substantially for two and three supply voltages, but in most cases there was no improvement at all for a single optimal supply voltage.
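    A rough cost model helps make the trade-off concrete. The sketch below is not the paper's ILP/NLP formulation: it simply charges each operation an energy proportional to V^2 times an effective switched capacitance, and adds a fixed level-converter penalty on every data-flow edge where a lower-voltage producer feeds a higher-voltage consumer. All capacitances, voltages, and the converter cost are invented.

```python
# Toy multi-Vdd energy model with level-conversion penalties.
SWITCHED_CAP = {"mul": 8.0, "add": 2.0}  # effective switched capacitance per op type
LEVEL_CONVERTER_ENERGY = 0.5             # assumed fixed cost per converted edge


def datapath_energy(ops, edges, vdd):
    """ops: {name: type}, edges: [(src, dst)], vdd: {name: supply voltage}."""
    e_ops = sum(SWITCHED_CAP[ops[n]] * vdd[n] ** 2 for n in ops)
    e_lc = sum(LEVEL_CONVERTER_ENERGY
               for src, dst in edges if vdd[src] < vdd[dst])
    return e_ops + e_lc


if __name__ == "__main__":
    ops = {"m1": "mul", "m2": "mul", "a1": "add"}
    edges = [("m1", "a1"), ("m2", "a1")]
    single = datapath_energy(ops, edges, {"m1": 5.0, "m2": 5.0, "a1": 5.0})
    dual = datapath_energy(ops, edges, {"m1": 4.0, "m2": 4.0, "a1": 5.0})
    print(f"single 5V supply: {single:.1f}   dual 4V/5V supplies: {dual:.1f}")
```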

    Channel and active component abstractions for WSN programming - a language model with operating system support

    Get PDF
    To support the programming of Wireless Sensor Networks, a number of unconventional programming models have evolved, in particular the event-based model. These models are non-intuitive to programmers because they introduce unnecessary, non-intrinsic complexity. Component-based languages like Insense can eliminate much of this unnecessary complexity via the use of active components and synchronous channels. However, simply layering an Insense implementation over an existing event-based system such as TinyOS, while demonstrating the approach's efficacy, is insufficiently space- and time-efficient for production use. The design and implementation of a new language-specific OS, InceOS, enables both space- and time-efficient programming of sensor networks using component-based languages like Insense.
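    A loose analogy can be written in Python rather than Insense itself (this is not the Insense or InceOS API): each "active component" runs its behaviour in its own thread, and components communicate only over channels, approximated here with size-1 blocking queues. The component and channel names are invented.

```python
# Active components communicating over a blocking channel (rough analogy only).
import queue
import threading


class ActiveComponent(threading.Thread):
    """A component whose behaviour runs in its own thread."""

    def __init__(self, name, behaviour):
        super().__init__(name=name)
        self.behaviour = behaviour

    def run(self):
        self.behaviour()


if __name__ == "__main__":
    readings = queue.Queue(maxsize=1)  # channel: sensor -> logger

    def sensor():
        for sample in (17, 21, 19):
            readings.put(sample)       # blocks while the channel is full

    def logger():
        for _ in range(3):
            print("logged:", readings.get())

    parts = [ActiveComponent("sensor", sensor), ActiveComponent("logger", logger)]
    for p in parts:
        p.start()
    for p in parts:
        p.join()
```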

    Composability and Predictability for Independent Application Development, Verification and Execution

    Get PDF
    System-on-chip (SOC) design gets increasingly complex, as a growing number of applications are integrated in modern systems. Some of these applications have real-time requirements, such as a minimum throughput or a maximum latency. To reduce cost, system resources are shared between applications, making their timing behavior inter-dependent. Real-time requirements must hence be verified for all possible combinations of concurrently executing applications, which is not feasible with commonly used simulation-based techniques. This chapter addresses this problem using two complexity-reducing concepts: composability and predictability. Applications in a composable system are completely isolated and cannot affect each other's behavior, enabling them to be verified independently. Predictable systems, on the other hand, provide lower bounds on performance, allowing applications to be verified using formal performance analysis. Five techniques to achieve composability and/or predictability in SOC resources are presented, and we explain their implementation for processors, interconnect, and memories in our platform.
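    One standard mechanism that delivers both properties on a shared resource, and is easy to sketch, is a TDM slot table on, say, a shared memory port (an illustration, not necessarily one of the chapter's five techniques): an application only ever uses its own slots, so it cannot be affected by the others, and the spacing of its slots gives a hard bound on how long any of its requests can wait. The slot table and slot length below are invented.

```python
# Worst-case wait derived from a TDM slot table on a shared resource.
SLOT_TABLE = ["A", "B", "A", "C", "A", "B"]  # one TDM frame, one owner per slot
SLOT_CYCLES = 16                             # assumed length of each slot in cycles


def worst_case_wait(app):
    """Max cycles a request from `app` waits for the start of its next slot,
    assuming it arrived just after one of its own slots began."""
    n = len(SLOT_TABLE)
    own = [i for i, owner in enumerate(SLOT_TABLE) if owner == app]
    if not own:
        raise ValueError(f"{app} has no slots in the table")
    # Largest distance (in slots) from one own slot to the next, wrapping around.
    gaps = [(own[(k + 1) % len(own)] - s) % n or n for k, s in enumerate(own)]
    return max(gaps) * SLOT_CYCLES


if __name__ == "__main__":
    for app in ("A", "B", "C"):
        print(f"{app}: worst-case wait {worst_case_wait(app)} cycles")
```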