
    Scheduling real-time, periodic jobs using imprecise results

    A process is called a monotone process if the accuracy of its intermediate results is non-decreasing as more time is spent to obtain the result. The result produced by a monotone process upon its normal termination is the desired result; the error in this result is zero. External events such as timeouts or crashes may cause the process to terminate prematurely. If the intermediate result produced by the process upon its premature termination is saved and made available, the application may still find the result usable and, hence, acceptable; such a result is said to be an imprecise one. The error in an imprecise result is nonzero. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. This problem differs from traditional scheduling problems in that the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result. Consequently, the amounts of processor time assigned to tasks in a valid schedule can be less than the amounts of time required to complete the tasks. A meaningful formulation of this problem, taking into account the quality of the overall result, is discussed. Three algorithms for scheduling jobs for which the effects of errors in results produced in different periods are not cumulative are described, and their relative merits are evaluated.
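
    As a minimal sketch of the monotone-process model above (assuming, purely for illustration, a linear error decay and hypothetical field names; the abstract prescribes neither), a task can be viewed as a mandatory portion plus an optional portion, with the error falling to zero once both complete:

        # Sketch of the imprecise-computation error model described above.
        # Assumptions (not from the paper): error decays linearly in the
        # optional time executed; task fields are hypothetical names.

        from dataclasses import dataclass

        @dataclass
        class Task:
            mandatory: float   # time that must always be executed
            optional: float    # additional time needed for a precise result

        def error(task: Task, assigned: float) -> float:
            """Error of the result when `assigned` processor time is granted.

            A monotone process: error is non-increasing in assigned time and
            reaches zero once mandatory + optional time has been executed.
            """
            if assigned < task.mandatory:
                raise ValueError("mandatory portion must complete")
            done_optional = min(assigned - task.mandatory, task.optional)
            return (task.optional - done_optional) / task.optional if task.optional else 0.0

        # Example: a task with 2 units mandatory and 3 units optional work,
        # terminated after 4 units, returns an imprecise result with error 1/3.
        t = Task(mandatory=2.0, optional=3.0)
        print(error(t, 4.0))  # 0.333...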

    Scheduling periodic jobs using imprecise results

    One approach to avoid timing faults in hard real-time systems is to make available intermediate, imprecise results produced by real-time processes. When a result of the desired quality cannot be produced in time, an imprecise result of acceptable quality produced before the deadline can be used. The problem of scheduling periodic jobs to meet deadlines on a system that provides the necessary programming language primitives and run-time support for processes to return imprecise results is discussed. Since the scheduler may choose to terminate a task before it is completed, causing it to produce an acceptable but imprecise result, the amount of processor time assigned to any task in a valid schedule can be less than the amount of time required to complete the task. A meaningful formulation of the scheduling problem must take into account the overall quality of the results. Depending on the different types of undesirable effects caused by errors, jobs are classified as type N or type C. For type N jobs, the effects of errors in results produced in different periods are not cumulative. A reasonable performance measure is the average error over all jobs. Three heuristic algorithms that lead to feasible schedules with small average errors are described. For type C jobs, the undesirable effects of errors produced in different periods are cumulative. Schedulability criteria of type C jobs are discussed.
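
    Building on the previous sketch, the average-error measure for type N jobs might be computed as follows; the per-job aggregation (mean of per-period means) is an illustrative choice, not necessarily the paper's exact definition:

        # Sketch: average-error performance measure for type N jobs, whose
        # per-period errors are not cumulative. All names are illustrative.

        def average_error(per_period_errors: dict[str, list[float]]) -> float:
            """Average error over all jobs: mean of each job's mean per-period error."""
            job_means = [sum(errs) / len(errs) for errs in per_period_errors.values()]
            return sum(job_means) / len(job_means)

        # Two jobs observed over three periods each.
        errors = {"J1": [0.0, 0.2, 0.1], "J2": [0.0, 0.0, 0.3]}
        print(average_error(errors))  # 0.1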

    Using Imprecise Computing for Improved Real-Time Scheduling

    Conventional hard real-time scheduling is often overly pessimistic due to worst-case execution time estimation. The pessimism can be mitigated by exploiting imprecise computing in applications where occasional small errors are acceptable. This leverage has been investigated in a few previous works, which are restricted to preemptive cases. We study how to make use of imprecise computing in uniprocessor non-preemptive real-time scheduling, which is known to be more difficult than its preemptive counterpart. Several heuristic algorithms are developed for periodic tasks with independent or cumulative errors due to imprecision. Simulation results show that the proposed techniques can significantly improve task schedulability and achieve the desired accuracy-schedulability tradeoff. The benefit of considering imprecise computing is further confirmed by a prototype implementation in Linux. The mixed-criticality system is a popular model for reducing pessimism in real-time scheduling while providing guarantees for critical tasks in the presence of unexpected overruns. However, it is controversial due to some drawbacks. First, all low-criticality tasks are dropped in high-criticality mode, although they are still needed. Second, a single high-criticality job overrun forces the pessimistic high-criticality mode on all high-criticality tasks, and resource utilization consequently becomes inefficient. We attempt to tackle these two limitations of mixed-criticality systems simultaneously in multiprocessor scheduling, whereas several recent works address them mostly in uniprocessor scheduling. We study how to achieve graceful degradation of low-criticality tasks by continuing their execution with imprecise computing, or even precise computing if there is sufficient utilization slack. Schedulability conditions under this Variable-Precision Mixed-Criticality (VPMC) system model are investigated for partitioned scheduling and global fpEDF-VD scheduling, and a deferred switching protocol is introduced so that the chance of switching to high-criticality mode is significantly reduced. Moreover, we develop a precision optimization approach that maximizes precise computing of low-criticality tasks through a 0-1 knapsack formulation. Experiments are performed through both software simulations and Linux prototyping with consideration of overhead. Schedulability of the proposed methods is studied so that the quality of service for low-criticality tasks is improved while all deadline constraints are guaranteed to be satisfied. The proposed precision optimization can largely reduce computing errors compared to constantly executing low-criticality tasks with imprecise computing in high-criticality mode.
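
    The precision optimization step lends itself to a standard 0-1 knapsack dynamic program. The sketch below is one plausible reading: each low-criticality task selected for precise execution consumes some extra utilization, the capacity is the available slack, and the discretization scale and all names are assumptions:

        # Sketch of the 0-1 knapsack formulation for precision optimization:
        # choose which low-criticality tasks run precisely so that the extra
        # utilization fits the available slack and the precision gain is
        # maximized. Weights, values, and the scaling factor are illustrative.

        def select_precise_tasks(extra_util, gain, slack, scale=1000):
            """Classic 0-1 knapsack DP over utilization discretized by `scale`."""
            cap = int(slack * scale)
            w = [int(u * scale) for u in extra_util]
            best = [0.0] * (cap + 1)
            choice = [[False] * (cap + 1) for _ in w]
            for i, wi in enumerate(w):
                for c in range(cap, wi - 1, -1):
                    if best[c - wi] + gain[i] > best[c]:
                        best[c] = best[c - wi] + gain[i]
                        choice[i][c] = True
            # Backtrack to recover the selected task set.
            selected, c = [], cap
            for i in range(len(w) - 1, -1, -1):
                if choice[i][c]:
                    selected.append(i)
                    c -= w[i]
            return best[cap], sorted(selected)

        # Three tasks needing 10%, 25%, 15% extra utilization; 30% slack.
        value, tasks = select_precise_tasks([0.10, 0.25, 0.15], [1.0, 2.0, 1.2], 0.30)
        print(value, tasks)  # 2.2 (tasks 0 and 2)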

    Preemptive scheduling on uniform parallel machines with controllable job processing times

    In this paper, we provide a unified approach to solving preemptive scheduling problems with uniform parallel machines and controllable processing times. We demonstrate that the single-criterion problem of minimizing total compression cost subject to the constraint that all due dates be met can be formulated in terms of maximizing a linear function over a generalized polymatroid. This justifies the applicability of the greedy approach and allows us to develop fast algorithms for solving the problem with arbitrary release and due dates, as well as its special case with zero release dates and a common due date. For the bicriteria counterpart of the latter problem we develop an efficient algorithm that constructs the trade-off curve for minimizing the compression cost and the makespan.
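
    A rough sketch of the greedy idea for the special case with zero release dates and a common due date d: maximize total weighted decompression (equivalent to minimizing compression cost) by granting each job, in non-increasing weight order, the largest processing time that keeps the classic uniform-machine feasibility condition (for every k, the k largest processing times fit in d times the k fastest speeds). The binary search and all names are illustrative; the paper's polymatroid algorithms are more direct:

        # Hedged sketch: greedy over the feasibility polymatroid for m
        # uniform machines with speeds `speeds`, zero release dates, and a
        # common due date d. All names are illustrative.

        def feasible(p, speeds, d):
            p_sorted = sorted(p, reverse=True)
            s_sorted = sorted(speeds, reverse=True)
            cap = 0.0
            for k in range(1, len(p_sorted) + 1):
                cap += d * (s_sorted[k - 1] if k <= len(s_sorted) else 0.0)
                if sum(p_sorted[:k]) > cap + 1e-9:
                    return False
            return True

        def greedy_decompress(lower, upper, weights, speeds, d, eps=1e-6):
            """Process jobs in non-increasing weight order; give each job the
            largest feasible processing time in [lower_j, upper_j]."""
            n = len(weights)
            p = list(lower)  # start fully compressed
            assert feasible(p, speeds, d), "even fully compressed times are infeasible"
            for j in sorted(range(n), key=lambda j: -weights[j]):
                lo, hi = p[j], upper[j]
                while hi - lo > eps:  # binary search the max feasible p_j
                    mid = (lo + hi) / 2
                    trial = p[:]
                    trial[j] = mid
                    if feasible(trial, speeds, d):
                        lo = mid
                    else:
                        hi = mid
                p[j] = lo
            return p

        # Two machines with speeds 2 and 1, common due date 10.
        print(greedy_decompress([2, 2, 2], [25, 12, 8], [5, 3, 1], [2, 1], 10))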

    Flexible Scheduling Methods and Tools for Real-Time Control Systems

    This thesis deals with flexibility in the design of real-time control systems. By dynamic resource scheduling it is possible to achieve on-line adaptability and increased control performance under resource constraints. The approach requires simulation tools for control and real-time systems co-design. One approach to achieve flexibility in the run-time scheduling of control tasks is feedback scheduling, where resources are scheduled dynamically based on measurements of actual timing variations and control performance. An overview of feedback scheduling techniques for control systems is presented. A flexible strategy for implementation of model predictive control (MPC) is described. In MPC, the control signal in each sample is obtained by the solution of a constrained quadratic optimization problem. A termination criterion is derived that, unlike traditional MPC, takes the effects of computational delay into account in the optimization. A scheduling scheme is also described, where the MPC cost functions being minimized are used as dynamic task priorities for a set of MPC tasks. The MATLAB/Simulink-based simulator TrueTime is presented. TrueTime is a co-design tool that facilitates simulation of distributed real-time control systems, where the execution of controller tasks in a real-time kernel is simulated in parallel with network transmissions and the continuous-time plant dynamics. Using TrueTime it is possible to study the effects of CPU and network scheduling on control performance and to experiment with flexible scheduling techniques and compensation schemes. A general overview of the simulator is given and the event-based kernel implementation is described. TrueTime is used in two simulation case studies. The first emulates TCP on top of standard Ethernet to simulate networked control of a robot system. The second case study uses TrueTime to simulate a web server application. A feedback scheduling strategy for QoS control in the web server is described.
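
    The feedback-scheduling idea can be sketched as a simple utilization controller: measured execution times are fed back, and task periods are rescaled toward a utilization setpoint. The proportional rescaling law and the names below are illustrative and are not TrueTime's API:

        # Sketch of a feedback scheduler in the spirit described above:
        # measured execution times feed back into period rescaling so that
        # total CPU utilization tracks a setpoint.

        def rescale_periods(periods, measured_exec, u_setpoint):
            """Rescale all task periods by a common factor so that
            sum(C_i / T_i) matches the utilization setpoint."""
            u = sum(c / t for c, t in zip(measured_exec, periods))
            factor = u / u_setpoint  # >1 means overload: lengthen periods
            return [t * factor for t in periods]

        # Three control tasks, currently overloaded at U = 1.1.
        periods = [0.01, 0.02, 0.04]
        exec_times = [0.004, 0.008, 0.012]
        print(rescale_periods(periods, exec_times, u_setpoint=0.8))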

    Scheduling independent stochastic tasks under deadline and budget constraints

    This paper discusses scheduling strategies for the problem of maximizing the expected number of tasks that can be executed on a cloud platform within a given budget and under a deadline constraint. The execution times of tasks follow IID probability laws. The main questions are how many processors to enroll and whether and when to interrupt tasks that have been executing for some time. We provide complexity results and an asymptotically optimal strategy for the problem instance with discrete probability distributions and without a deadline. We extend the latter strategy to the general case with continuous distributions and a deadline, and we design an efficient heuristic which is shown to outperform standard approaches when running simulations for a variety of useful distribution laws.
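
    For a discrete execution-time distribution, the "when to interrupt" question can be sketched as picking the cutting threshold that maximizes expected completed tasks per unit of budget; the paper's optimal strategy is more refined, and the names below are illustrative:

        # Sketch of the interruption-threshold idea: interrupting every task
        # after threshold v_k costs E[min(X, v_k)] per attempt and succeeds
        # with P(X <= v_k); pick the threshold maximizing completed tasks
        # per unit of budget.

        def best_threshold(values, probs):
            """values: sorted support of the execution-time law; probs: P(X = v)."""
            best_ratio, best_v = 0.0, None
            cum_p, cum_cost = 0.0, 0.0
            for v, p in zip(values, probs):
                cum_p += p
                cum_cost += p * v
                expected_cost = cum_cost + (1.0 - cum_p) * v
                ratio = cum_p / expected_cost  # tasks completed per budget unit
                if ratio > best_ratio:
                    best_ratio, best_v = ratio, v
            return best_v, best_ratio

        # X = 1 w.p. 0.6, 10 w.p. 0.4: interrupting at 1 completes 0.6 task/unit.
        print(best_threshold([1, 10], [0.6, 0.4]))  # (1, 0.6)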

    Overlay networks monitoring

    The phenomenal growth of the Internet and its entry into many aspects of daily life has led to a great dependency on its services. Multimedia and content distribution applications (e.g., video streaming, online gaming, VoIP) require Quality of Service (QoS) guarantees in terms of bandwidth, delay, loss, and jitter to maintain a certain level of performance. Moreover, E-commerce applications and retail websites are faced with increasing demand for better throughput and response time performance. The most practical way to realize such applications is through the use of overlay networks, which are logical networks that implement service and resource management functionalities at the application layer. Overlays offer better deployability, scalability, security, and resiliency properties than network layer based implementation of services. Network monitoring and routing are among the most important issues in the design and operation of overlay networks. Accurate monitoring of QoS parameters is a challenging problem due to: (i) unbounded link stress in the underlying IP network, and (ii) the conflict in measurements caused by spatial and temporal overlap among measurement tasks. In this context, the focus of this dissertation is on the design and evaluation of efficient QoS monitoring and fault location algorithms using overlay networks. First, the issue of monitoring accuracy provided by multiple concurrent active measurements is studied on a large-scale overlay test-bed (PlanetLab), the factors affecting the accuracy are identified, and the measurement conflict problem is introduced. Then, the problem of conducting conflict-free measurements is formulated as a scheduling problem of real-time tasks, its complexity is proven to be NP-hard, and efficient heuristic algorithms for the problem are proposed. Second, an algorithm for minimizing monitoring overhead while controlling the IP link stress is proposed. Finally, the use of overlay monitoring to locate IP links' faults is investigated. Specifically, the problem of designing an overlay network for verifying the location of IP links' faults, under cost and link stress constraints, is formulated as an integer generalized flow problem, and its complexity is proven to be NP-hard. An optimal polynomial-time algorithm for the relaxed problem (relaxed link stress constraints) is proposed. A combination of simulation and experimental studies using real-life measurement tools and Internet topologies of major ISP networks is conducted to evaluate the proposed algorithms. The studies show that the proposed algorithms significantly improve the accuracy and reduce the link stress of overlay monitoring, while incurring low overheads. The evaluation of fault location algorithms shows that fast and highly accurate verification of faults can be achieved using overlay monitoring. In conclusion, the holistic view taken and the solutions developed for network monitoring provide a comprehensive framework for the design, operation, and evolution of overlay networks.
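
    The conflict-free measurement scheduling idea can be sketched as an EDF-style heuristic: two measurement tasks conflict when their overlay paths share an IP link, so the scheduler starts the earliest-deadline pending task whose links are all free. Data structures and names are illustrative, not the dissertation's algorithm:

        # Sketch of conflict-free measurement scheduling: tasks sharing an
        # underlying IP link conflict; a greedy pass starts non-conflicting
        # tasks in earliest-deadline-first order.

        def schedule_step(pending, busy_links):
            """pending: list of (deadline, task_id, links); returns started tasks."""
            started = []
            for deadline, task_id, links in sorted(pending):
                if not (links & busy_links):  # no shared IP link: no conflict
                    started.append(task_id)
                    busy_links |= links
            return started, busy_links

        # Tasks A and B share link 3 and conflict; C is disjoint.
        pending = [(5, "A", {1, 3}), (7, "B", {3, 4}), (9, "C", {2, 5})]
        print(schedule_step(pending, set()))  # (['A', 'C'], {1, 2, 3, 5})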

    Quality and Utility - On the Use of Time-Value Functions for Integrating Quality- and Time-Flexible Aspects in a Dynamic Real-Time Scheduling Environment

    Scheduling methodologies for real-time applications have been of keen interest to diverse research communities for several decades. Depending on the application area, algorithms have been developed that are tailored to specific requirements with respect to both the individual components of which an application is made up and the computational platform on which it is to be executed. Many real-time scheduling algorithms base their decisions solely or partly on timing constraints expressed by deadlines which must be met even under worst-case conditions. The increasing complexity of computing hardware means that worst-case execution time analysis becomes increasingly pessimistic. Scheduling hard real-time computations according to their worst-case execution times (which is common practice) will thus result, on average, in an increasing amount of spare capacity. The main goal of flexible real-time scheduling is to exploit this otherwise wasted capacity. Flexible scheduling schemes have been proposed to increase the ability of a real-time system to adapt to changing requirements and nondeterminism in the application behaviour. These models can be categorised as those whose source of flexibility is the quality of computations and those which are flexible regarding their timing constraints. This work describes a novel model which allows both flexible timing constraints and quality profiles to be specified for an application. Furthermore, it demonstrates the applicability of this specification method to real-world examples and suggests a set of feasible scheduling algorithms for the proposed problem class.
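
    A minimal sketch of scheduling with time-value functions, the central notion above: each task's utility depends on its completion time, and a simple heuristic runs the ready task with the highest utility density. The piecewise-linear value shape and all names are assumptions for illustration:

        # Sketch of time-value (time-utility) function scheduling: each
        # task's value depends on when it completes; a heuristic picks the
        # ready task with the highest utility density.

        def value(t_completion, deadline, max_value, decay_end):
            """Piecewise-linear time-value function: full value up to the
            deadline, linearly decaying to zero at decay_end."""
            if t_completion <= deadline:
                return max_value
            if t_completion >= decay_end:
                return 0.0
            return max_value * (decay_end - t_completion) / (decay_end - deadline)

        def pick_next(now, ready):
            """ready: list of (name, remaining, deadline, max_value, decay_end)."""
            def density(task):
                name, rem, dl, mv, de = task
                return value(now + rem, dl, mv, de) / rem
            return max(ready, key=density)[0]

        ready = [("ctrl", 2.0, 10.0, 8.0, 14.0), ("log", 0.5, 3.0, 3.0, 6.0)]
        print(pick_next(now=2.0, ready=ready))  # "log": 3.0/0.5 > 8.0/2.0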

    Task assignment in home health care : a fuzzy group genetic algorithm approach

    The assignment of home care tasks to nursing staff is a complex problem for decision makers concerned with optimizing home healthcare operations scheduling and logistics. Motivated by ever-increasing home-based care needs, the design of high-quality task assignments is essential for maintaining or improving worker morale, job satisfaction, service efficiency, and service quality, and for sustaining business competitiveness. To achieve high-quality task assignments, the assigned workloads should be balanced or fair among the caregivers. Therefore, the desired goal is to balance the workload of caregivers while avoiding long-distance travel in visiting the patients. However, the desired goal is often subjective, as it involves the caregivers, the management, and the patients. As such, the goal tends to be imprecise in the real world. This paper develops a fuzzy group genetic algorithm (FGGA) for task assignment in home healthcare services. The FGGA approach uses fuzzy evaluation based on fuzzy set theory. Results from illustrative examples show that the approach is promising.
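
    The fuzzy evaluation step can be sketched as follows: the imprecise goal "workloads should be balanced" becomes a membership function, and an assignment's fitness is its aggregated degree of satisfaction, as a genetic algorithm would use it. The membership shapes, weights, and names are illustrative, not the paper's exact formulation:

        # Sketch of fuzzy evaluation for task assignment: the decision
        # maker's imprecise balance goal is encoded as a membership degree
        # in [0, 1], then aggregated with a travel-distance goal.

        def balance_satisfaction(workloads, tolerance=0.2):
            """Degree (0..1) to which workloads are 'balanced': 1 when all
            loads equal the mean, falling linearly to 0 at +/- tolerance."""
            mean = sum(workloads) / len(workloads)
            worst = max(abs(w - mean) / mean for w in workloads)
            return max(0.0, 1.0 - worst / tolerance)

        def fitness(workloads, travel, max_travel, w_balance=0.6):
            """Aggregate fuzzy satisfaction of the balance and travel goals."""
            travel_sat = max(0.0, 1.0 - travel / max_travel)
            return w_balance * balance_satisfaction(workloads) + (1 - w_balance) * travel_sat

        # Three caregivers with near-equal loads and moderate total travel.
        print(fitness([8.0, 7.5, 8.5], travel=30.0, max_travel=100.0))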