
    Block level voltage/frequency scheduling

    Over the past years, state-of-the-art power optimization methods have moved towards higher abstraction levels, which results in more efficient power savings. Among existing power optimization approaches, dynamic power management (DPM) is considered one of the most effective strategies. Depending on the abstraction level, DPM can be implemented in different forms; here we focus on scheduling, which is better suited to real-time system design. Unlike concurrent scheduling approaches that start from either the HLS (High-Level Synthesis) or the RTS (Real-Time System) point of view, we propose a solution that combines both, namely block-level voltage/frequency scheduling (BLVFS). The presented block-level voltage/frequency scheduling approach offers a generic solution for low-power SoC (System on Chip) design, whereas approaches in the HLS and RTS categories depend strongly on the system's functionality. Considering an SoC as a combination of heterogeneous functional blocks, our approach provides efficient power savings by dynamically scheduling the scaling of voltage and frequency at the same time. Simulation results indicate that significant power savings can be achieved by using heuristic-based strategies.
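    The abstract does not spell out the heuristic itself, so the sketch below is only a minimal illustration of the idea behind block-level voltage/frequency scheduling: for each functional block, pick the lowest-power (voltage, frequency) operating point that still meets that block's timing budget. The operating points, block parameters, and the P = C·V²·f power model are assumptions for illustration, not the paper's algorithm.

```python
from dataclasses import dataclass

# Hypothetical (voltage [V], frequency [MHz]) operating points of the SoC.
OPERATING_POINTS = [(1.2, 800.0), (1.0, 600.0), (0.9, 400.0), (0.8, 250.0)]


@dataclass
class Block:
    name: str
    cycles: float        # work to execute within the scheduling interval
    deadline_us: float   # time budget for that work (microseconds)
    capacitance: float   # switched capacitance (relative units)


def dynamic_power(cap: float, volt: float, freq_mhz: float) -> float:
    """Classic dynamic power model P = C * V^2 * f (relative units)."""
    return cap * volt ** 2 * freq_mhz


def schedule_block(block: Block):
    """Pick the lowest-power operating point that still meets the block's deadline."""
    feasible = []
    for volt, freq in OPERATING_POINTS:
        exec_time_us = block.cycles / freq   # cycles / MHz == microseconds
        if exec_time_us <= block.deadline_us:
            feasible.append((dynamic_power(block.capacitance, volt, freq), volt, freq))
    if not feasible:
        return OPERATING_POINTS[0]           # fall back to the fastest setting
    _, volt, freq = min(feasible)            # lowest power among feasible points
    return volt, freq


if __name__ == "__main__":
    soc = [
        Block("dsp", cycles=120_000, deadline_us=400.0, capacitance=1.0),
        Block("crypto", cycles=30_000, deadline_us=300.0, capacitance=0.6),
        Block("io_ctrl", cycles=5_000, deadline_us=500.0, capacitance=0.3),
    ]
    for blk in soc:
        volt, freq = schedule_block(blk)
        print(f"{blk.name}: run at {volt:.1f} V / {freq:.0f} MHz")
```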

    A Multi-objective Perspective for Operator Scheduling using Fine-grained DVS Architecture

    The stringent power budget of fine-grained power-managed digital integrated circuits has driven chip designers to optimize power at the cost of area and delay, which were the traditional cost criteria for circuit optimization. The emerging scenario motivates us to revisit the classical operator scheduling problem under the availability of DVFS-enabled functional units that can trade off cycles for power. We study the design space defined by this trade-off and present a branch-and-bound (B/B) algorithm to explore this state space and report the Pareto-optimal front with respect to area and power. The scheduling also aims at maximum resource sharing and is able to attain significant area and power gains for complex benchmarks when timing constraints are sufficiently relaxed. Experimental results show that the algorithm, operating without any user constraint (area/power), is able to solve the problem for most available benchmarks, and that the use of power or area budget constraints leads to significant performance gains.
    Comment: 18 pages, 6 figures, International Journal of VLSI Design & Communication Systems (VLSICS)
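    To make the area/power trade-off concrete, here is a toy sketch in the spirit of the abstract (not the paper's B/B formulation): each operation of a small assumed data-flow graph can run in a fast, power-hungry mode or a slow, low-energy mode; assignments whose critical path exceeds a latency budget are discarded, and the surviving (area, energy) points are reduced to a Pareto front. Exhaustive enumeration stands in for the branch-and-bound search, and all numbers are hypothetical.

```python
from itertools import product

# Tiny assumed data-flow graph: op -> predecessors (listed in topological order).
DEPS = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"], "e": ["c"], "f": ["d", "e"]}
MODES = {"fast": (1, 4.0), "slow": (2, 1.5)}    # mode -> (cycles, energy)
LATENCY_BUDGET = 7                              # relaxed timing constraint (cycles)


def evaluate(modes):
    """Return (area, energy, latency) of an ASAP schedule for one mode assignment."""
    start, finish = {}, {}
    for op in DEPS:
        start[op] = max((finish[p] for p in DEPS[op]), default=0)
        finish[op] = start[op] + MODES[modes[op]][0]
    latency = max(finish.values())
    # 'Area' = maximum number of ops overlapping in any cycle (functional units needed).
    area = max(sum(1 for op in DEPS if start[op] <= t < finish[op])
               for t in range(latency))
    energy = sum(MODES[m][1] for m in modes.values())
    return area, energy, latency


def pareto_front(points):
    """Keep only (area, energy) points not strictly dominated by another point."""
    return sorted({p for p in points
                   if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)})


if __name__ == "__main__":
    ops = list(DEPS)
    feasible = []
    for assignment in product(MODES, repeat=len(ops)):   # enumerate mode assignments
        modes = dict(zip(ops, assignment))
        area, energy, latency = evaluate(modes)
        if latency <= LATENCY_BUDGET:                     # timing-infeasible points dropped
            feasible.append((area, energy))
    print("Pareto-optimal (area, energy) points:", pareto_front(feasible))
```

    A real branch-and-bound would prune partial assignments using bounds on latency, area, and power instead of enumerating all of them, but the resulting Pareto front is the same kind of output the abstract describes.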

    Online Scheduled Execution of Quantum Circuits Protected by Surface Codes

    Quantum circuits are the preferred formalism for expressing quantum information processing tasks. Quantum circuit design automation methods mostly use a waterfall approach and assume that high-level circuit descriptions are hardware-agnostic. This assumption has led to a static circuit perspective: the number of quantum bits and quantum gates is determined before circuit execution, and everything is considered reliable, with zero probability of failure. Many different schemes for achieving reliable fault-tolerant quantum computation exist, with different schemes suited to different architectures. A number of large experimental groups are developing architectures well suited to being protected by surface quantum error-correcting codes. Such circuits could include unreliable logical elements, such as state distillation, whose failure can be determined only after their actual execution. Therefore, practical logical circuits, as envisaged by many groups, are likely to have a dynamic structure. This requires online scheduling of their execution: what needs to be executed next is known for certain only after previous elements have finished executing. This work shows that such scheduling shares similarities with place-and-route methods. It also introduces the first online schedulers of quantum circuits protected by surface codes, and it highlights their scheduling efficiency by comparing the new methods with state-of-the-art static scheduling of surface-code-protected fault-tolerant circuits.
    Comment: accepted in QI
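    The dynamic structure described above can be pictured with a small discrete-event sketch (assumed durations and an assumed failure probability, not the paper's schedulers): elements are dispatched only once their predecessors have finished, and a failed distillation box is discovered only after it has run and must be re-queued.

```python
import heapq
import random


def online_schedule(elements, deps, duration, is_distillation, fail_prob=0.1, seed=0):
    """Greedy online scheduler: dispatch every element whose predecessors are done."""
    rng = random.Random(seed)
    remaining = {e: set(deps[e]) for e in elements}   # unfinished predecessors
    events, now = [], 0.0                             # min-heap of (finish_time, element)

    def dispatch(t):
        for e in [e for e, preds in remaining.items() if not preds]:
            del remaining[e]
            heapq.heappush(events, (t + duration[e], e))
            print(f"t={t:5.1f}  start  {e}")

    dispatch(now)
    while events:
        now, e = heapq.heappop(events)
        if is_distillation.get(e, False) and rng.random() < fail_prob:
            # The failure only becomes visible after execution: re-run the element.
            print(f"t={now:5.1f}  FAILED {e}, retrying")
            heapq.heappush(events, (now + duration[e], e))
            continue
        print(f"t={now:5.1f}  done   {e}")
        for preds in remaining.values():
            preds.discard(e)                          # unlock successors
        dispatch(now)
    return now


if __name__ == "__main__":
    deps = {"distill_A": [], "distill_B": [],
            "T_gate_1": ["distill_A"], "T_gate_2": ["distill_B"],
            "merge": ["T_gate_1", "T_gate_2"]}
    duration = {"distill_A": 10.0, "distill_B": 10.0,
                "T_gate_1": 2.0, "T_gate_2": 2.0, "merge": 3.0}
    total = online_schedule(list(deps), deps, duration,
                            is_distillation={"distill_A": True, "distill_B": True})
    print(f"total logical time: {total}")
```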

    Formal and Informal Methods for Multi-Core Design Space Exploration

    We propose a tool-supported methodology for design-space exploration for embedded systems. It provides means to define high-level models of applications and multi-processor architectures and evaluate the performance of different deployment (mapping, scheduling) strategies while taking uncertainty into account. We argue that this extension of the scope of formal verification is important for the viability of the domain.
    Comment: In Proceedings QAPL 2014, arXiv:1406.156
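    As an informal illustration of the kind of exploration described (not the formal methods used in the paper), the sketch below enumerates task-to-processor mappings for a small hypothetical application, samples uncertain execution times, and ranks the mappings by their mean and worst sampled makespan. All task and processor parameters are assumptions.

```python
from itertools import product
import random

TASKS = {"t1": 4.0, "t2": 3.0, "t3": 5.0, "t4": 2.0}   # nominal execution times (ms)
PROCESSORS = {"big": 1.0, "little": 1.8}               # relative slowdown factors
JITTER = 0.25                                          # +/- 25% runtime uncertainty
SAMPLES = 500


def makespan(mapping, rng):
    """Makespan of independent tasks under one random draw of execution times."""
    load = {p: 0.0 for p in PROCESSORS}
    for task, proc in mapping.items():
        nominal = TASKS[task] * PROCESSORS[proc]
        load[proc] += nominal * rng.uniform(1.0 - JITTER, 1.0 + JITTER)
    return max(load.values())


def explore(seed=0):
    """Enumerate all mappings and score each by mean and worst sampled makespan."""
    rng = random.Random(seed)
    results = []
    for assignment in product(PROCESSORS, repeat=len(TASKS)):
        mapping = dict(zip(TASKS, assignment))
        spans = [makespan(mapping, rng) for _ in range(SAMPLES)]
        results.append((sum(spans) / SAMPLES, max(spans), mapping))
    results.sort(key=lambda r: (r[0], r[1]))
    return results


if __name__ == "__main__":
    for mean, worst, mapping in explore()[:3]:    # three best mappings found
        print(f"mean={mean:5.2f} ms  worst={worst:5.2f} ms  {mapping}")
```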

    Optimizing construction of scheduled data flow graph for on-line testability

    The objective of this work is to develop a new methodology for behavioural synthesis using a synthesis flow better suited to the scheduling of independent calculations and to non-concurrent online testing. The traditional behavioural synthesis process can be defined as the compilation of an algorithmic specification into an architecture composed of a data path and a controller. This flow of synthesis generally involves scheduling, resource allocation, generation of the data path, and controller synthesis. Experiments showed that optimization started at the high-level synthesis stage improves the performance of the result, yet current tools only offer synthesis optimizations from the RTL level onwards. This justifies the development of an optimization methodology that takes effect from the behavioural specification and accompanies the synthesis process through its various stages. In this paper we propose the use of algebraic properties (commutativity, associativity and distributivity) to transform the readable mathematical formulas of an algorithmic specification into mathematical formulas that can be evaluated efficiently. This effectively reduces the execution time of the scheduled calculations and increases the opportunities for testability.
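    As a concrete, generic example of such an algebraic transformation, the sketch below uses associativity to rebalance a chained sum into a balanced tree, shortening the critical path that scheduling would see; the expression and the depth metric are assumed for illustration and are not taken from the paper.

```python
def chain(operands):
    """Left-leaning parse of a + b + c + ... as nested binary '+' nodes."""
    expr = operands[0]
    for op in operands[1:]:
        expr = ("+", expr, op)
    return expr


def balanced(operands):
    """Associativity-preserving rebalancing into a roughly balanced tree."""
    if len(operands) == 1:
        return operands[0]
    mid = len(operands) // 2
    return ("+", balanced(operands[:mid]), balanced(operands[mid:]))


def depth(expr):
    """Longest chain of dependent additions, i.e. the minimum number of
    control steps needed with unlimited adders (one addition per step)."""
    if isinstance(expr, str):
        return 0
    return 1 + max(depth(expr[1]), depth(expr[2]))


if __name__ == "__main__":
    operands = list("abcdefgh")                          # a + b + ... + h
    print("chained  depth:", depth(chain(operands)))     # 7 control steps
    print("balanced depth:", depth(balanced(operands)))  # 3 control steps
```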

    A Novel SAT-Based Approach to the Task Graph Cost-Optimal Scheduling Problem

    The Task Graph Cost-Optimal Scheduling Problem consists in scheduling a certain number of interdependent tasks onto a set of heterogeneous processors (characterized by idle and running rates per time unit) while minimizing the cost of the entire process. This paper provides a novel formulation for this scheduling problem, in which an optimal solution is computed through a sequence of Binate Covering Problems hinged within a Bounded Model Checking paradigm. In this approach, each covering instance, providing a min-cost trace for a given schedule depth, can be solved with several strategies, resorting to Minimum-Cost Satisfiability solvers or Pseudo-Boolean Optimization tools. Unfortunately, all direct resolution methods show very low efficiency and scalability. As a consequence, we introduce a specialized method to solve the same sequence of problems, based on a traditional all-solution SAT solver. This approach follows the "circuit cofactoring" strategy, as it exploits a powerful technique to capture a large set of solutions for any new SAT counter-example. The overall method is completed with a branch-and-bound heuristic which evaluates lower and upper bounds on the schedule length in order to reduce the state space that has to be visited. Our results show that the proposed strategy significantly improves on the blind binate covering scheme and outperforms general-purpose state-of-the-art tools.
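    The bounding step can be illustrated with a small sketch (the task graph, processor rates, and greedy list scheduler are assumptions, not the paper's encoding): a critical-path lower bound and a feasible list-schedule upper bound bracket the schedule lengths that the bounded-model-checking unrolling would need to consider.

```python
# Task -> (work units, predecessors), listed in topological order;
# processor -> running rate (time per work unit).  Both are assumed purely for illustration.
TASKS = {"t1": (4, []), "t2": (4, []), "t3": (4, []), "t4": (3, ["t1", "t2", "t3"])}
PROC_RATE = {"p_fast": 1.0, "p_slow": 2.0}


def lower_bound():
    """Critical-path length assuming every task runs on the fastest processor."""
    best_rate = min(PROC_RATE.values())
    finish = {}
    for task, (work, preds) in TASKS.items():
        finish[task] = max((finish[p] for p in preds), default=0.0) + work * best_rate
    return max(finish.values())


def upper_bound():
    """Length of a greedy list schedule; being feasible, it bounds the optimum from above."""
    ready = {p: 0.0 for p in PROC_RATE}   # time at which each processor becomes free
    finish = {}
    for task, (work, preds) in TASKS.items():
        earliest = max((finish[p] for p in preds), default=0.0)
        # Place the task on the processor that finishes it soonest.
        proc = min(PROC_RATE, key=lambda p: max(earliest, ready[p]) + work * PROC_RATE[p])
        start = max(earliest, ready[proc])
        finish[task] = ready[proc] = start + work * PROC_RATE[proc]
    return max(finish.values())


if __name__ == "__main__":
    lo, hi = lower_bound(), upper_bound()
    print(f"only schedule depths in [{lo:.0f}, {hi:.0f}] need to be explored")
```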

    Business Process Redesign in the Perioperative Process: A Case Perspective for Digital Transformation

    This case study investigates business process redesign within the perioperative process as a method to achieve digital transformation. Specific perioperative sub-processes are targeted for redesign and digitalization, yielding improvement. Based on a 184-month longitudinal study of a large, 1,157 registered-bed academic medical center, the observed effects are viewed through a lens of information technology (IT) impact on core capabilities and core strategy to yield a digital transformation framework that supports patient-centric improvement across perioperative sub-processes. This research identifies existing limitations, potential capabilities, and the subsequent contextual understanding needed to minimize perioperative process complexity, target opportunities for improvement, and ultimately yield improved capabilities. Dynamic technological activities of analysis, evaluation, and synthesis, applied to specific perioperative patient-centric data collected within integrated hospital information systems, yield the organizational resource for process management and control. Conclusions include theoretical and practical implications as well as study limitations.