
    An Experimental Comparison of Speed Scaling Algorithms with Deadline Feasibility Constraints

    We consider the first, and most well studied, speed scaling problem in the algorithmic literature: the scheduling quality-of-service measure is a deadline feasibility constraint, and the power objective is to minimize the total energy used. Four online algorithms for this problem have been proposed in the algorithmic literature. Based on the best upper bound that can be proved on the competitive ratio, the ranking of the online algorithms from best to worst is: qOA, OA, AVR, BKP. As a test case on the effectiveness of competitive analysis to predict the best online algorithm, we report on an experimental "horse race" between these algorithms using instances based on web server traces. Our main conclusion is that the ranking of our algorithms based on their performance in our experiments is identical to the order predicted by competitive analysis. This ranking holds over a large range of possible power functions, and even if the power objective is temperature.
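
    To make the flavor of these rules concrete, here is a minimal Python sketch of AVR (Average Rate, due to Yao, Demers and Shenker), which at any time runs at the sum of the densities w_j/(d_j - r_j) of the currently active jobs; the job-tuple representation and the example instance are our own assumptions, not taken from the paper.

        # Minimal sketch of the AVR (Average Rate) online speed scaling rule:
        # at time t, run at the sum of the densities w / (d - r) of all jobs
        # that have been released but whose deadlines have not yet passed.

        def avr_speed(jobs, t):
            """jobs: list of (release, deadline, work) tuples; t: current time."""
            return sum(w / (d - r) for (r, d, w) in jobs if r <= t < d)

        # Example: two overlapping jobs, each with density 0.5.
        jobs = [(0.0, 10.0, 5.0),
                (2.0, 6.0, 2.0)]
        print(avr_speed(jobs, 3.0))  # 1.0: both jobs active
        print(avr_speed(jobs, 8.0))  # 0.5: only the first job remains active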

    Dynamic voltage scaling algorithms for soft and hard real-time systems

    Dynamic Voltage Scaling (DVS) has not been fully explored for further minimizing the energy consumption of microprocessors and prolonging the operational life of real-time systems. In this dissertation, workload-prediction-based DVS and offline convex-optimization-based DVS are investigated for soft and hard real-time systems, respectively. The proposed algorithms for soft and hard real-time systems are implemented on a small-scale wireless sensor network (WSN) and a simulation model, respectively.
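
    As a hedged sketch of what workload-prediction-based DVS can look like in its simplest form (the EWMA predictor, the frequency levels, and all names are our assumptions, not the dissertation's algorithms), the code below predicts the next period's utilization from recent observations and picks the lowest frequency that covers it.

        # Toy workload-prediction DVS: exponentially weighted moving average
        # of observed utilizations, then the smallest frequency that fits.

        FREQS = [0.25, 0.5, 0.75, 1.0]   # normalized frequency levels (assumed)

        def next_frequency(history, alpha=0.5):
            pred = 0.0
            for u in history:             # EWMA over observed utilizations
                pred = alpha * u + (1 - alpha) * pred
            for f in FREQS:               # smallest frequency covering the prediction
                if f >= pred:
                    return f
            return FREQS[-1]

        print(next_frequency([0.2, 0.4, 0.35]))  # 0.5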

    A Decomposition Algorithm for Nested Resource Allocation Problems

    We propose an exact polynomial algorithm for a resource allocation problem with convex costs and constraints on partial sums of resource consumptions, in the presence of either continuous or integer variables. No assumption of strict convexity or differentiability is needed. The method solves a hierarchy of resource allocation subproblems, whose solutions are used to convert constraints on sums of resources into bounds for separate variables at higher levels. The resulting time complexity for the integer problem is O(n log m log(B/n)), and the complexity of obtaining an ε-approximate solution for the continuous case is O(n log m log(B/ε)), where n is the number of variables, m the number of ascending constraints (such that m < n), ε a desired precision, and B the total resource. This algorithm attains the best-known complexity when m = n, and improves on it when log m = o(log n). Extensive experimental analyses are conducted with four recent algorithms on various continuous problems drawn from theory and practice. The proposed method achieves higher performance than previous algorithms, solving all problems with up to one million variables in less than one minute on a modern computer. (Comment: Working Paper, MIT, 23 pages.)
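
    For intuition about the separable convex core of such problems, here is a textbook greedy marginal-allocation sketch; it is not the paper's decomposition algorithm and omits the nested constraints. It allocates an integer budget B one unit at a time to the variable with the cheapest next marginal cost, in O(B log n) time rather than the paper's bound.

        # Greedy marginal allocation for min sum_i c_i(x_i) s.t. sum_i x_i = B,
        # with each c_i convex. Correct because convexity makes marginal costs
        # nondecreasing, so the globally cheapest next unit is always safe.

        import heapq

        def greedy_allocate(costs, B):
            """costs: list of convex cost functions; B: integer budget."""
            x = [0] * len(costs)
            heap = [(c(1) - c(0), i) for i, c in enumerate(costs)]
            heapq.heapify(heap)
            for _ in range(B):
                _, i = heapq.heappop(heap)      # cheapest next unit
                x[i] += 1
                c = costs[i]
                heapq.heappush(heap, (c(x[i] + 1) - c(x[i]), i))
            return x

        print(greedy_allocate([lambda v: v * v, lambda v: 2 * v * v], 6))  # [4, 2]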

    Predictive control using an FPGA with application to aircraft control

    Alternative and more efficient computational methods can extend the applicability of MPC to systems with tight real-time requirements. This paper presents a “system-on-a-chip” MPC system, implemented on a field programmable gate array (FPGA), consisting of a sparse structure-exploiting primal dual interior point (PDIP) QP solver for MPC reference tracking and a fast gradient QP solver for steady-state target calculation. A parallel reduced-precision iterative solver is used to accelerate the solution of the set of linear equations forming the computational bottleneck of the PDIP algorithm. A numerical study of the effect of reducing the number of iterations highlights the effectiveness of the approach. The system is demonstrated with an FPGA-in-the-loop testbench controlling a nonlinear simulation of a large airliner. This study considers many more manipulated inputs than any previous FPGA-based MPC implementation, yet the implementation comfortably fits into a mid-range FPGA, and the controller compares well in terms of solution quality and latency to state-of-the-art QP solvers running on a standard PC.
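
    To illustrate the idea of truncating the iterative linear solver inside a PDIP iteration, the following toy sketch runs a fixed, small number of conjugate gradient steps on a symmetric positive definite system; the solver choice and problem data are our assumptions, not the paper's reduced-precision FPGA design.

        # Truncated conjugate gradient: a few iterations give an approximate
        # Newton step, trading accuracy for a fixed, predictable latency.

        import numpy as np

        def truncated_cg(A, b, iters=5):
            """Approximately solve A x = b for symmetric positive definite A."""
            x = np.zeros_like(b)
            r = b - A @ x                 # initial residual
            p = r.copy()
            for _ in range(iters):
                Ap = A @ p
                alpha = (r @ r) / (p @ Ap)
                x += alpha * p
                r_new = r - alpha * Ap
                beta = (r_new @ r_new) / (r @ r)
                p = r_new + beta * p
                r = r_new
            return x

        A = np.array([[4.0, 1.0], [1.0, 3.0]])
        b = np.array([1.0, 2.0])
        print(truncated_cg(A, b, iters=2))  # exact for a 2x2 SPD system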

    Task Scheduling on the Cloud with Hard Constraints

    Scheduling Bag-of-Tasks (BoT) applications on the cloud can be more challenging than in grid and cluster environments. This is because a user may have a budgetary constraint or a deadline for executing the BoT application in order to keep the overall execution costs low. The research in this paper investigates task scheduling on the cloud, given two hard constraints based on a user-defined budget and a deadline. A heuristic algorithm is proposed and implemented to satisfy the hard constraints while executing the BoT application in a cost-effective manner. The proposed algorithm is evaluated using four scenarios based on the trade-off between performance and the cost of using different cloud resource types. The experimental evaluation confirms the feasibility of the algorithm in satisfying the constraints. The key observation is that multiple resource types can be a better alternative to using a single type of resource. (Comment: Visionary Track of the IEEE 11th World Congress on Services, IEEE SERVICES 2015.)
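
    As a hedged sketch of the kind of feasibility check such a heuristic performs (the function names, hourly billing model, and uniform-task assumption are ours, not the paper's algorithm), the code below searches for a VM count that meets both the deadline and the budget for a bag of equal-length tasks.

        # Find the smallest number of identical VMs that finishes num_tasks
        # equal-length tasks within the deadline without exceeding the budget.

        import math

        def feasible_vms(num_tasks, task_time, price_hr, speed, budget, deadline):
            for n in range(1, num_tasks + 1):
                makespan = math.ceil(num_tasks / n) * task_time / speed
                cost = n * math.ceil(makespan / 3600.0) * price_hr  # hourly billing
                if makespan <= deadline and cost <= budget:
                    return n, makespan, cost
            return None  # no configuration satisfies both hard constraints

        # 100 tasks of 600 s each, $0.10/VM-hour, 2-hour deadline, $5 budget.
        print(feasible_vms(100, 600, 0.10, 1.0, 5.0, 7200))  # (9, 7200.0, 1.8)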

    The Thermal-Constrained Real-Time Systems Design on Multi-Core Platforms -- An Analytical Approach

    Over the past decades, shrinking transistor sizes have enabled more and more transistors to be integrated into an IC chip to achieve ever higher computing performance. However, the semiconductor industry is now reaching a saturation point of Moore’s Law, largely due to soaring power consumption and heat dissipation, among other factors. High chip temperature not only significantly increases packaging/cooling cost and degrades system performance and reliability, but also increases energy consumption and can even damage the chip permanently. Although designing 2D and even 3D multi-core processors helps to lower the power/thermal barrier of single-core architectures by exploiting thread/process-level parallelism, the higher power density and longer heat-removal path have made the thermal problem substantially more challenging, surpassing the heat dissipation capability of traditional cooling mechanisms such as cooling fans, heat sinks, and heat spreaders in the design of new generations of computing systems. As a result, dynamic thermal management (DTM), i.e., controlling the thermal behavior by dynamically varying computing performance and workload allocation on an IC chip, has been well recognized as an effective strategy to deal with these thermal challenges.
    Different from many existing DTM heuristics that are based on simple intuitions, we seek to address the thermal problems through a rigorous analytical approach, to achieve the high predictability required in real-time system design. In this regard, we make a number of important contributions. First, we develop a series of lemmas and theorems general enough to uncover the fundamental principles and characteristics of the thermal model, peak temperature identification, and peak temperature reduction, which are key to thermal-constrained real-time computer system design. Second, we develop a design-time frequency- and voltage-oscillating approach on multi-core platforms, which can greatly enhance system throughput and service capacity. Third, different from the traditional workload-balancing approach, we develop a thermal-balancing approach that can substantially improve energy efficiency and task-partitioning feasibility, especially when system utilization is high or the temperature constraint is tight. The significance of our research is that not only do our proposed algorithms on throughput maximization and energy conservation significantly outperform existing work, as demonstrated in our extensive experimental results, but the theoretical results are very general and can greatly benefit other thermal-related research.
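
    As a toy illustration of the kind of thermal model such analyses build on, the sketch below discretizes the standard first-order model dT/dt = a·P - b·T (coefficients and power traces are illustrative assumptions, not the dissertation's model or data) and shows the temperature oscillation that frequency/voltage-oscillating schemes exploit.

        # Forward-Euler simulation of dT/dt = a*P - b*T under an alternating
        # hot/cold power trace; peak tracks the maximum temperature reached.

        def simulate(power_trace, a=1.0, b=0.1, dt=0.1, T0=0.0):
            T, peak = T0, T0
            for P in power_trace:
                T += dt * (a * P - b * T)
                peak = max(peak, T)
            return T, peak

        trace = ([10.0] * 50 + [2.0] * 50) * 3   # alternating hot/cold phases
        print(simulate(trace))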

    Multi-agent Adaptive Architecture for Flexible Distributed Real-time Systems

    Recent critical embedded systems are becoming more and more complex, and they usually react to their environment, which requires amending their behaviors by applying run-time reconfiguration scenarios. A system is defined in this paper as a set of networked devices, each of which has its own operating system, a processor to execute related periodic software tasks, and a local battery. A reconfiguration is any operation allowing the addition, removal, or update of tasks to adapt the device and the whole system to its environment. It may be a reaction to a fault or an optimization of the system's functional behavior. Nevertheless, such a scenario can cause the violation of real-time or energy constraints, which is a critical run-time problem. We propose a multi-agent adaptive architecture to handle dynamic reconfigurations and ensure the correct execution of concurrent real-time distributed tasks under energy constraints. The proposed architecture integrates a centralized scheduler agent (ScA), the common decision-making element for the scheduling problem, which is able to carry out the required run-time solutions for the system's feasibility based on operations research techniques and mathematical tools. The architecture also assigns a reconfiguration agent (RA_p) to each device p to control and handle local reconfiguration scenarios under the instructions of the ScA. A token-based protocol is defined for the coordination between the different distributed agents in order to guarantee the whole system's feasibility under energy constraints.
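
    The following toy sketch (our own simplification, not the paper's protocol) illustrates the token discipline: only the RA currently holding the token may ask the ScA whether a local reconfiguration keeps the system feasible.

        # Token-based coordination between device agents (RA) and a central
        # scheduler agent (ScA); the feasibility test is a stand-in.

        class ScA:
            def check(self, device_id, new_util):
                return new_util <= 1.0            # stand-in feasibility test

        class RA:
            def __init__(self, device_id):
                self.device_id = device_id
                self.has_token = False

            def reconfigure(self, sca, new_util):
                if not self.has_token:            # must wait for the token
                    return "wait for token"
                ok = sca.check(self.device_id, new_util)
                return "apply" if ok else "reject"

        agents = [RA(i) for i in range(3)]
        agents[0].has_token = True                # token circulates in a ring
        print(agents[0].reconfigure(ScA(), 0.8))  # apply
        print(agents[1].reconfigure(ScA(), 0.8))  # wait for token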

    High Performance dynamic voltage/frequency scaling algorithm for real-time dynamic load management and code mobility

    Modern cyber-physical systems assume a complex and dynamic interaction between the real world and the computing system in real time. In this context, changes in the physical environment trigger changes in the computational load to execute. On the other hand, task migration services offered by networked control systems also require the management of dynamic real-time computing load in nodes. In such systems it would be difficult, if not impossible, to analyse off-line all the possible combinations of processor loads. For this reason, it is worthwhile to define new flexible architectures that enable computing systems to adapt to potential changes in the environment. We assume a system composed of three main components: the first is responsible for managing the requests that arise when new tasks require execution. This management component asks the second component about the resources available to accept the new tasks. The second component performs a feasibility analysis to determine whether the new tasks can be accepted while meeting their real-time constraints; a new processor speed is also computed. A third component monitors the execution of tasks, applying a fixed-priority scheduling policy and additionally controlling the frequency of the processor. This paper focuses on the second component, providing a "correct" (a task is never accepted if it is not schedulable) and "near-exact" (a task is rarely rejected if it is schedulable) algorithm that is applicable in practice because of its low-to-medium and predictable computational cost. The algorithm analyses task admission in terms of processor frequency scaling. The paper presents the details of a novel algorithm to analyse task admission and processor frequency assignment. Additionally, we perform several simulations to evaluate the comparative performance of the proposed approach in terms of energy consumption, task rejection ratios, and real computing costs. The simulation results show that, from the cost, execution predictability, and task acceptance points of view, the proposed algorithm mostly outperforms other constant voltage scaling algorithms. (Coronel, J.O.; Simó, J.E. (2012). Journal of Systems and Software 85(4):906-919. https://doi.org/10.1016/j.jss.2011.11.284)
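
    The paper's admission algorithm is not reproduced here, but the following sketch conveys the flavor of frequency-aware admission under fixed-priority scheduling: it admits a task at the lowest frequency for which the Liu-Layland rate-monotonic utilization bound still holds. Since the bound is sufficient but not necessary, such a test is "correct" (it never admits an unschedulable set) but pessimistic rather than near-exact.

        # Admission test: find the lowest frequency f such that the scaled
        # utilization stays under the Liu-Layland bound n*(2^(1/n) - 1).

        def admit(tasks, new_task, freqs):
            """tasks: list of (wcet_at_full_speed, period); freqs: ascending."""
            cand = tasks + [new_task]
            n = len(cand)
            bound = n * (2 ** (1.0 / n) - 1)       # RM utilization bound
            for f in freqs:
                u = sum(c / (f * p) for c, p in cand)
                if u <= bound:
                    return f                       # admit, run at frequency f
            return None                            # reject the new task

        print(admit([(1.0, 10.0)], (2.0, 8.0), [0.5, 0.75, 1.0]))  # 0.5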

    Energy-Centric Scheduling for Real-Time Systems

    Energy consumption is today an important design issue for all kinds of digital systems, and essential for battery-operated ones. An important fraction of this energy is dissipated by the processors running the application software. To reduce this energy consumption, one may, for instance, lower the processor clock frequency and supply voltage. This, however, might degrade the performance of the whole system. In real-time systems, the crucial issue is timing, which depends directly on system speed. Real-time scheduling and energy efficiency are therefore tightly connected issues, and are addressed together in this work. Several scheduling approaches for low energy are described in the thesis, most targeting variable-speed processor architectures. At the task level, a novel speed scheduling algorithm for tasks with a probabilistic execution pattern is introduced and compared to an existing compile-time approach. For task graphs, a list-scheduling-based algorithm with an energy-sensitive priority is proposed. For task sets, off-line methods for computing the maximum required task speeds are described, both for rate-monotonic and earliest-deadline-first scheduling. Also, a run-time speed optimization policy based on slack redistribution is proposed for rate-monotonic scheduling. Next, an energy-efficient extension of the earliest-deadline-first priority assignment policy is proposed, aimed at tasks with probabilistic execution times. Finally, scheduling is examined in conjunction with the assignment of tasks to processors, as part of various low-energy design flows. For some of the algorithms given in the thesis, energy measurements were carried out on a real hardware platform containing a variable-speed processor. The results confirm the validity of the initial assumptions and models used throughout the thesis. These experiments also show the efficiency of the newly introduced scheduling methods.
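
    As a back-of-the-envelope illustration of why lowering frequency and supply voltage saves energy (the premise behind these scheduling methods), the sketch below uses the standard dynamic-power model P ≈ C·V²·f; the specific voltage/frequency pairs are illustrative assumptions, not measurements from the thesis.

        # Energy for a fixed amount of work: power ~ C*V^2*f and time ~ 1/f,
        # so energy per job scales as C*V^2 and drops with the voltage squared.

        def energy(v, f, cycles, c=1.0):
            power = c * v ** 2 * f        # dynamic power ~ C * V^2 * f
            time = cycles / f             # execution time at frequency f
            return power * time           # = C * V^2 * cycles

        print(energy(v=1.2, f=1.0e9, cycles=1e9))   # full speed
        print(energy(v=0.8, f=0.5e9, cycles=1e9))   # ~55% less energy, 2x slower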