    Preemptive scheduling on uniform parallel machines with controllable job processing times

    In this paper, we provide a unified approach to solving preemptive scheduling problems with uniform parallel machines and controllable processing times. We demonstrate that a single-criterion problem of minimizing total compression cost, subject to the constraint that all due dates should be met, can be formulated in terms of maximizing a linear function over a generalized polymatroid. This justifies the applicability of the greedy approach and allows us to develop fast algorithms for solving the problem with arbitrary release and due dates, as well as its special case with zero release dates and a common due date. For the bicriteria counterpart of the latter problem, we develop an efficient algorithm that constructs the trade-off curve for minimizing the compression cost and the makespan.
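
    As a rough illustration of the greedy principle the abstract invokes, the sketch below (not taken from the paper) applies the textbook greedy rule for maximizing a linear function over a polymatroid: process elements in decreasing order of their cost coefficients and give each one its marginal capacity under the rank function. The rank function and cost vector are invented, purely illustrative assumptions.

        # Hedged sketch: greedy maximization of a linear function over the
        # polymatroid defined by a monotone submodular rank function f.
        def greedy_polymatroid_max(elements, cost, f):
            """Maximize sum(cost[e] * x[e]) subject to x(S) <= f(S) for all S, x >= 0."""
            # Only elements with positive coefficients are worth increasing.
            order = sorted((e for e in elements if cost[e] > 0),
                           key=lambda e: cost[e], reverse=True)
            x = {e: 0.0 for e in elements}
            chosen = frozenset()
            for e in order:
                # Give e its marginal capacity given all higher-cost elements.
                x[e] = f(chosen | {e}) - f(chosen)
                chosen = chosen | {e}
            return x

        # Toy example (assumed): f(S) = min(2|S|, 3) is monotone and submodular.
        f = lambda S: min(2.0 * len(S), 3.0)
        print(greedy_polymatroid_max(["a", "b", "c"], {"a": 5, "b": 1, "c": 3}, f))
        # -> {'a': 2.0, 'b': 0.0, 'c': 1.0}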

    Imprecise results: Utilizing partial computations in real-time systems

    In real-time systems, a computation may not have time to complete its execution because of deadline requirements. In such cases, no result will be available other than the approximate results produced by the computation up to that point. It is desirable to utilize these imprecise results if possible. Two approaches are proposed to enable computations to return imprecise results when executions cannot be completed normally. The milestone approach records results periodically and, if a deadline is reached, returns the last recorded result. The sieve approach demarcates sections of code that can be skipped if the time available is insufficient. By using these approaches, the system is able to produce imprecise results when deadlines are reached. The design of the Concord project, which supports imprecise computations using these techniques, is described. Also presented is a general model of imprecise computations using these techniques, as well as one that takes into account the influence of the environment, showing where the latter approach fits into this model.
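
    The milestone idea lends itself to a very small sketch. The code below is a hedged illustration, not the Concord implementation: an iterative computation periodically records its best partial result, and the last recorded value is returned when the deadline arrives. The pi-series workload and checkpoint interval are placeholder assumptions.

        import time

        def milestone_compute(deadline_s, checkpoint_every=1000):
            """Refine a result iteratively; at the deadline, return the last milestone."""
            deadline = time.monotonic() + deadline_s
            milestone, partial, k = None, 0.0, 0
            while True:
                partial += (-1) ** k / (2 * k + 1)   # one refinement step (Leibniz series)
                k += 1
                if k % checkpoint_every == 0:
                    milestone = 4 * partial          # record the current, imprecise result
                    if time.monotonic() >= deadline: # deadline reached: use what we have
                        return milestone, k

        print(milestone_compute(deadline_s=0.01))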

    Fuzzy Feedback Scheduling of Resource-Constrained Embedded Control Systems

    The quality of control (QoC) of a resource-constrained embedded control system may be jeopardized in dynamic environments with variable workload. This gives rise to an increasing demand for the co-design of control and scheduling. To deal with uncertainties in resource availability, a fuzzy feedback scheduling (FFS) scheme is proposed in this paper. Within the framework of feedback scheduling, the sampling periods of control loops are dynamically adjusted using the fuzzy control technique. The feedback scheduler provides QoC guarantees in dynamic environments by maintaining the CPU utilization at a desired level. The framework and design methodology of the proposed FFS scheme are described in detail. A simplified mobile robot target tracking system is investigated as a case study to demonstrate the effectiveness of the proposed FFS scheme. The scheme is independent of task execution times, robust to measurement noise, and easy to implement, while incurring only a small overhead. Comment: To appear in the International Journal of Innovative Computing, Information and Control.
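
    To make the feedback-scheduling loop concrete, here is a hedged sketch in which the measured CPU utilization is compared with a setpoint and a small fuzzy rule base rescales the sampling periods of the control loops. The membership functions, rule consequents and 0.7 setpoint are illustrative assumptions, not the design from the paper.

        def fuzzify(error):
            """Triangular memberships for the utilization error on [-0.3, 0.3]."""
            neg = max(0.0, min(1.0, -error / 0.3))
            pos = max(0.0, min(1.0, error / 0.3))
            zero = max(0.0, 1.0 - abs(error) / 0.3)
            return {"neg": neg, "zero": zero, "pos": pos}

        def period_scale(util_measured, util_setpoint=0.7):
            """Fuzzy rules: overload -> lengthen periods, underload -> shorten them."""
            mu = fuzzify(util_measured - util_setpoint)
            consequents = {"neg": 0.9, "zero": 1.0, "pos": 1.15}  # assumed scaling factors
            return sum(mu[k] * consequents[k] for k in mu) / (sum(mu.values()) or 1.0)

        # One scheduler invocation: rescale the periods of three hypothetical loops.
        periods = {"loop1": 0.010, "loop2": 0.020, "loop3": 0.040}  # seconds
        scale = period_scale(util_measured=0.85)                    # CPU currently overloaded
        periods = {name: p * scale for name, p in periods.items()}
        print(scale, periods)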

    Scheduling data flow program in xkaapi: A new affinity based Algorithm for Heterogeneous Architectures

    Efficient implementations of parallel applications on heterogeneous hybrid architectures require a careful balance between computations and communications with accelerator devices. Even if most of the communication time can be overlapped by computations, it is essential to reduce the total volume of communicated data. The literature therefore abounds with ad hoc methods to reach that balance, but these are architecture- and application-dependent. We propose here a generic mechanism to automatically optimize the scheduling between CPUs and GPUs, and we compare two strategies within this mechanism: the classical Heterogeneous Earliest Finish Time (HEFT) algorithm and our new, parametrized, Distributed Affinity Dual Approximation algorithm (DADA), which groups the tasks by affinity before running a fast dual approximation. We ran experiments on a heterogeneous parallel machine with six CPU cores and eight NVIDIA Fermi GPUs. Three standard dense linear algebra kernels from the PLASMA library were ported on top of the Xkaapi runtime, and we report their performance. Both HEFT and DADA perform well under various experimental conditions, but DADA performs better for larger systems and numbers of GPUs and, in most cases, generates much less data transfer than HEFT to achieve the same performance.
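
    For readers unfamiliar with the HEFT baseline named above, the sketch below is a simplified list-scheduling version of it: tasks are ordered by upward rank and each is mapped to the resource giving the earliest finish time (no insertion policy). The two-resource setup, DAG and cost numbers are invented for illustration and are unrelated to the paper's PLASMA/Xkaapi experiments.

        def heft(tasks, succ, cost, comm):
            """succ: task -> list of (child, data volume); cost: task -> resource -> time."""
            resources = list(next(iter(cost.values())).keys())

            # Upward rank: average cost plus the most expensive path to an exit task.
            rank = {}
            def upward(t):
                if t not in rank:
                    avg = sum(cost[t].values()) / len(cost[t])
                    rank[t] = avg + max((comm * vol + upward(c) for c, vol in succ.get(t, [])),
                                        default=0.0)
                return rank[t]
            for t in tasks:
                upward(t)

            parents = {t: [] for t in tasks}
            for t in tasks:
                for c, vol in succ.get(t, []):
                    parents[c].append((t, vol))

            ready_at = {r: 0.0 for r in resources}  # when each resource becomes free
            finish = {}                             # task -> (resource, finish time)
            for t in sorted(tasks, key=lambda t: rank[t], reverse=True):
                best = None
                for r in resources:
                    # Earliest start: resource free and all inputs transferred.
                    est = ready_at[r]
                    for p, vol in parents[t]:
                        pr, pf = finish[p]
                        est = max(est, pf + (comm * vol if pr != r else 0.0))
                    eft = est + cost[t][r]
                    if best is None or eft < best[0]:
                        best = (eft, r)
                finish[t] = (best[1], best[0])
                ready_at[best[1]] = best[0]
            return finish

        # Tiny assumed DAG: t1 -> {t2, t3} -> t4, one slow "cpu" and one fast "gpu".
        tasks = ["t1", "t2", "t3", "t4"]
        succ = {"t1": [("t2", 1), ("t3", 1)], "t2": [("t4", 1)], "t3": [("t4", 1)]}
        cost = {t: {"cpu": 4.0, "gpu": 1.0} for t in tasks}
        print(heft(tasks, succ, cost, comm=0.5))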

    Genetic Algorithms in Real-Time Imprecise Computing

    This article describes the use of genetic algorithms in real-time systems that employ the imprecise computation paradigm. In real-time systems, the focus is on ensuring that each task in a set completes within its deadline. Faults may occur in the computation or the environment that can cause missed deadlines. That is why the idea of using partial results, when exact ones cannot be produced within the deadline, has been introduced. This idea has been formalized using the concepts of anytime algorithms and imprecise computation, and specific techniques have been developed for designing programs that can produce partial results and for developing systems that can support imprecise computation techniques. Genetic algorithms are methods that can be used, without any adaptation, in an imprecise computation system. They produce a solution that bears a certain measure of reliability, and this solution constantly improves during their execution. They can be used as part of a real-time system, especially for optimizing tasks where classical algorithms are not applicable or their computational time proves to be too expensive.
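
    A genetic algorithm used as an anytime computation can be sketched in a few lines: evolve the population until the deadline, then return the best individual found so far as the (possibly imprecise) result. The one-max fitness function and the GA parameters below are placeholder assumptions.

        import random, time

        def anytime_ga(deadline_s, n_bits=40, pop_size=30, mutate_p=0.02):
            deadline = time.monotonic() + deadline_s
            fitness = lambda ind: sum(ind)  # one-max: count the 1 bits (placeholder problem)
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            best = max(pop, key=fitness)
            while time.monotonic() < deadline:          # anytime loop: improve until the deadline
                pick = lambda: max(random.sample(pop, 3), key=fitness)  # tournament selection
                children = []
                while len(children) < pop_size:
                    a, b = pick(), pick()
                    cut = random.randrange(1, n_bits)                   # one-point crossover
                    child = [bit ^ (random.random() < mutate_p)         # bit-flip mutation
                             for bit in a[:cut] + b[cut:]]
                    children.append(child)
                pop = children
                best = max(pop + [best], key=fitness)   # never lose the best-so-far
            return best, fitness(best)                  # returned even if evolution was cut short

        print(anytime_ga(deadline_s=0.05))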

    Applications of Soft Computing in Mobile and Wireless Communications

    Soft computing is a synergistic combination of artificial intelligence methodologies used to model and solve real-world problems that are either impossible or too difficult to model mathematically. Furthermore, the use of conventional modeling techniques demands rigor, precision and certainty, which carry a computational cost. Soft computing, on the other hand, utilizes computation, reasoning and inference to reduce computational cost by exploiting tolerance for imprecision, uncertainty, partial truth and approximation. In addition to computational cost savings, soft computing is an excellent platform for autonomic computing, owing to its roots in artificial intelligence. Wireless communication networks are associated with much uncertainty and imprecision due to a number of stochastic processes, such as the escalating number of access points, constantly changing propagation channels, sudden variations in network load and the random mobility of users. This reality has fuelled numerous applications of soft computing techniques in mobile and wireless communications. This paper reviews various applications of the core soft computing methodologies in mobile and wireless communications.

    MapReduce is Good Enough? If All You Have is a Hammer, Throw Away Everything That's Not a Nail!

    Hadoop is currently the large-scale data analysis "hammer" of choice, but there exist classes of algorithms that aren't "nails", in the sense that they are not particularly amenable to the MapReduce programming model. To address this, researchers have proposed MapReduce extensions or alternative programming models in which these algorithms can be elegantly expressed. This essay espouses a very different position: that MapReduce is "good enough", and that instead of trying to invent screwdrivers, we should simply get rid of everything that's not a nail. To be more specific, much discussion in the literature surrounds the fact that iterative algorithms are a poor fit for MapReduce; the simple solution is to find alternative non-iterative algorithms that solve the same problem. This essay captures my personal experiences as an academic researcher as well as a software engineer in a "real-world" production analytics environment. From this combined perspective, I reflect on the current state and future of "big data" research.