
    Statistical Rate Monotonic Scheduling

    In this paper we present Statistical Rate Monotonic Scheduling (SRMS), a generalization of the classical RMS results of Liu and Layland that allows scheduling periodic tasks with highly variable execution times and statistical QoS requirements. Like RMS, SRMS has two components: a feasibility test and a scheduling algorithm. The feasibility test for SRMS ensures that, using SRMS' scheduling algorithm, a given periodic task set can share a given resource (e.g. a processor, communication medium, or switching device) without violating any of the periodic tasks' QoS constraints. The SRMS scheduling algorithm incorporates a number of unique features. First, it allows for fixed-priority scheduling that keeps the tasks' value (or importance) independent of their periods. Second, it allows for job admission control: jobs that are not guaranteed to finish by their deadlines can be rejected as soon as they are released, enabling the system to take necessary compensating actions. Admission control also preserves resources, since no time is spent on jobs that would miss their deadlines anyway. Third, SRMS integrates reservation-based and best-effort resource scheduling seamlessly. Reservation-based scheduling ensures the delivery of the minimal requested QoS; best-effort scheduling ensures that unused reserved bandwidth is not wasted, but rather used to improve QoS further. Fourth, SRMS allows a system to deal gracefully with overload conditions by ensuring a fair deterioration in QoS across all tasks, as opposed to penalizing tasks with longer periods, for example. Finally, SRMS has the added advantage that its schedulability test is simple and its scheduling algorithm has constant overhead, in the sense that the complexity of the scheduler does not depend on the number of tasks in the system. We have evaluated SRMS against a number of alternative scheduling algorithms suggested in the literature (e.g. RMS and slack stealing), as well as refinements thereof, which we describe in this paper. Consistently throughout our experiments, SRMS provided the best performance. In addition, to evaluate the optimality of SRMS, we compared it to an inefficient yet optimal scheduler for task sets with harmonic periods.
    National Science Foundation (CCR-970668
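    As background, the classical Liu and Layland feasibility test that SRMS generalizes reduces to a simple utilization-bound check; a minimal sketch of that classical test follows (task parameters are illustrative, and the statistical test in the paper is more involved):

```python
def rm_feasible(tasks):
    """tasks: list of (wcet, period) pairs for periodic tasks.

    Returns True if total utilization is below the Liu & Layland
    bound n*(2^(1/n) - 1), a sufficient condition for rate-monotonic
    schedulability on one processor.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound

# Hypothetical task set: three tasks with fixed worst-case execution times.
print(rm_feasible([(1, 4), (2, 8), (3, 16)]))  # True: U = 0.6875 <= 0.7798
```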

    Centralized vs distributed communication scheme on switched Ethernet for embedded military applications

    The current military communication network is a generation old and is no longer effective in meeting the emerging requirements imposed by future embedded military applications. A new interconnection system is therefore needed to overcome these limitations. Two new communication networks based upon Full Duplex Switched Ethernet are presented herein to this end. The first uses a distributed communication scheme in which devices can transmit their data simultaneously, which clearly improves the system's throughput and flexibility. However, migrating all existing applications into a compliant form could be an expensive step. To avoid this process, the second proposal keeps the current centralized communication scheme. Our objective is to assess and compare the real-time guarantees that each proposal can offer. The paper includes a functional description of each proposed communication network and a military avionic application that highlights the proposals' ability to support the required time-constrained communications.
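    The abstract does not state the analysis method, but real-time guarantees on switched Ethernet are commonly derived with network calculus; as an illustration only, the sketch below computes the standard delay bound for a token-bucket-constrained flow crossing a rate-latency node (all parameter values are hypothetical, not taken from the paper):

```python
def delay_bound(sigma, rho, rate, latency):
    """Worst-case delay of a (sigma, rho) token-bucket flow through a
    rate-latency server with service rate `rate` and latency `latency`
    (standard network-calculus result, valid when rho <= rate).
    """
    assert rho <= rate, "flow must not exceed the service rate"
    return latency + sigma / rate

# Hypothetical flow on a 100 Mbit/s switch port: burst of 12000 bits
# (one max frame), sustained rate 1 Mbit/s, 10 us forwarding latency.
d = delay_bound(sigma=12_000, rho=1e6, rate=100e6, latency=10e-6)
print(f"worst-case delay: {d * 1e6:.1f} us")  # 130.0 us
```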

    Capacity Planning and Lead Time Management

    In this paper we discuss a framework for capacity planning and lead time management in manufacturing companies, with an emphasis on the machine shop. First we show how queueing models can be used to find approximations of the mean and the variance of manufacturing shop lead times. These quantities often serve as a basis for setting a fixed planned lead time in an MRP-controlled environment. A major drawback of a fixed planned lead time is that it ignores the correlation between actual workloads and the lead times that can be realized under limited capacity flexibility. To overcome this problem, we develop a method that determines the earliest possible completion time of any arriving job without sacrificing the delivery performance of any other job in the shop. This earliest completion time is then taken as the delivery date and thereby determines a workload-dependent planned lead time. We compare this capacity planning procedure with a fixed planned lead time approach (as in MRP), with a procedure in which lead times are estimated based on the amount of work in the shop, and with a workload-oriented release procedure. Numerical experiments so far show excellent performance of the capacity planning procedure.
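    The abstract does not commit to a particular queueing model; as a minimal illustration of the first step, for an M/M/1 station the sojourn time (lead time) is exponential with rate equal to the capacity surplus, so its mean and variance are closed-form:

```python
def mm1_leadtime(arrival_rate, service_rate):
    """Mean and variance of the sojourn time (lead time) in an M/M/1
    queue. The sojourn time is exponential with rate
    (service_rate - arrival_rate), so both moments are closed-form.
    """
    assert arrival_rate < service_rate, "queue must be stable"
    mean = 1.0 / (service_rate - arrival_rate)
    variance = mean ** 2
    return mean, variance

# Hypothetical shop: 4 jobs/day arrive, 5 jobs/day can be processed.
mean, var = mm1_leadtime(arrival_rate=4.0, service_rate=5.0)
print(f"mean lead time {mean:.1f} days, std dev {var ** 0.5:.1f} days")
```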

    Climate Policy under Sustainable Discounted Utilitarianism

    Empirical evaluation of policies to mitigate climate change has been largely confined to the application of discounted utilitarianism (DU). DU is controversial, both due to the conditions through which it is justified and due to its consequences for climate policies, where the discounting of future utility gains from present abatement efforts makes it harder for such measures to justify their present costs. In this paper, we propose sustainable discounted utilitarianism (SDU) as an alternative principle for the evaluation of climate policy. Unlike undiscounted utilitarianism, which always assigns zero relative weight to present utility, SDU is an axiomatically based criterion which departs from DU by assigning zero weight to present utility if and only if the present is better off than the future. Using the DICE integrated assessment model to run risk analysis, we show that it is possible for the future to be worse off than the present along a ‘business as usual’ development path. Consequently, SDU and DU differ, and willingness to pay for emissions reductions is (sometimes significantly) higher under SDU than under DU. Under SDU, stringent schedules of emissions reductions increase social welfare, even for a relatively high utility discount rate.
    Keywords: climate change, discounted utilitarianism, intergenerational equity, sustainable development, sustainable discounted utilitarianism
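    To make the contrast concrete, the two criteria can be written side by side; the recursion below follows the axiomatic SDU literature (Asheim and Mitra) as we understand it, with notation ours rather than quoted from the paper:

```latex
% Discounted utilitarianism: geometric weights on all generations.
W^{\mathrm{DU}}_t = (1-\delta)\,u(c_t) + \delta\,W^{\mathrm{DU}}_{t+1}

% Sustainable discounted utilitarianism: present utility receives
% zero weight exactly when the present is better off than the future.
W^{\mathrm{SDU}}_t =
\begin{cases}
  (1-\delta)\,u(c_t) + \delta\,W^{\mathrm{SDU}}_{t+1}
      & \text{if } u(c_t) \le W^{\mathrm{SDU}}_{t+1},\\[2pt]
  W^{\mathrm{SDU}}_{t+1}
      & \text{otherwise,}
\end{cases}
```

    where $\delta \in (0,1)$ is the utility discount factor and $u(c_t)$ the utility of generation $t$: DU always blends present utility into welfare, while SDU drops it whenever the present is better off than the continuation.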

    DRS: Dynamic Resource Scheduling for Real-Time Analytics over Fast Streams

    In a data stream management system (DSMS), users register continuous queries and receive result updates as data arrive and expire. We focus on applications with real-time constraints, in which the user must receive each result update within a given period after the update occurs. To handle fast data, the DSMS is commonly placed on top of a cloud infrastructure. Because stream properties such as arrival rates can fluctuate unpredictably, cloud resources must be dynamically provisioned and scheduled accordingly to ensure real-time response. It is essential for existing systems and future developments alike to schedule resources dynamically according to the current workload, in order to avoid wasting resources or failing to deliver correct results on time. Motivated by this, we propose DRS, a novel dynamic resource scheduler for cloud-based DSMSs. DRS overcomes three fundamental challenges: (a) how to model the relationship between the provisioned resources and query response time; (b) where to best place resources; and (c) how to measure system load with minimal overhead. In particular, DRS includes an accurate performance model based on the theory of Jackson open queueing networks and is capable of handling arbitrary operator topologies, possibly with loops, splits and joins. Extensive experiments with real data confirm that DRS achieves real-time response with close to optimal resource consumption.
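    The performance model rests on Jackson open queueing networks; the sketch below shows the textbook computation such a model builds on: solve the traffic equations for per-node arrival rates, then combine per-node M/M/1 delays via Little's law (the operator topology and rates here are hypothetical, not DRS's actual model):

```python
import numpy as np

def jackson_response_time(external, routing, service):
    """Expected end-to-end response time in an open Jackson network.

    external: external arrival rate into each node (gamma_i)
    routing:  routing[i][j] = P(job leaves node i for node j)
    service:  service rate mu_i of each (M/M/1) node

    Solves the traffic equations lambda = gamma + R^T lambda, then
    applies Little's law across all nodes.
    """
    gamma = np.asarray(external, dtype=float)
    R = np.asarray(routing, dtype=float)
    mu = np.asarray(service, dtype=float)
    lam = np.linalg.solve(np.eye(len(gamma)) - R.T, gamma)
    assert np.all(lam < mu), "every node must be stable"
    mean_jobs = np.sum(lam / (mu - lam))   # mean population per node, summed
    return mean_jobs / gamma.sum()         # Little's law: T = L / throughput

# Hypothetical 3-operator topology with a split after node 0.
routing = [[0.0, 0.5, 0.5],
           [0.0, 0.0, 0.0],
           [0.0, 0.0, 0.0]]
print(jackson_response_time(external=[10.0, 0.0, 0.0],
                            routing=routing,
                            service=[20.0, 8.0, 8.0]))  # ~0.433
```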

    Video Smoothing of Aggregates of Streams with Bandwidth Constraints

    Compressed variable bit rate (VBR) video transmission is acquiring a growing importance in the telecommunication world. The high data rate variability of compressed video over multiple time scales makes efficient bandwidth resource utilization difficult to obtain. One of the approaches developed to face this problem is smoothing. Various smoothing algorithms that exploit client buffers have been proposed, reducing the peak rate and high rate variability by efficiently scheduling the video data to be transmitted over the network. The novel smoothing algorithm proposed in this paper, which represents a significant improvement over existing methods, performs data scheduling both for a single stream and for stream aggregations, taking available bandwidth constraints into account. It modifies, whenever possible, the smoothing schedule in such a way as to eliminate frame losses due to available bandwidth limitations. This technique can be applied to any smoothing algorithm already present in the literature and can be usefully exploited to minimize losses in multiplexed stream scenarios, like Terrestrial Digital Video Broadcasting (DVB-T), where a specific known available bandwidth must be shared by several multimedia flows. The developed algorithm has been applied to smoothing stored video, although it can also quite easily be adapted for real-time smoothing. The numerical results obtained, compared with those of the MVBA, another smoothing algorithm already presented and discussed in the literature, show the effectiveness of the proposed algorithm, in terms of lost video frames, for different multiplexed scenarios.
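    The paper's algorithm is not reproduced in the abstract; the sketch below only illustrates the two constraints any buffer-based smoothing schedule must respect, namely staying within the client buffer above the playback curve and within the available bandwidth, and how a frame loss is detected when the bandwidth cap makes this impossible (all values are illustrative):

```python
def smooth_schedule(frames, buffer, bandwidth):
    """Work-ahead transmission under a client buffer and a per-slot
    bandwidth cap. Returns the per-slot schedule, or None if some
    frame cannot arrive before its playback slot (a frame loss).
    """
    total = sum(frames)
    sent = 0.0    # cumulative data transmitted so far
    played = 0.0  # cumulative data consumed by the decoder
    schedule = []
    for t, frame in enumerate(frames):
        room = played + buffer - sent          # free client buffer space
        tx = min(bandwidth[t], room, total - sent)
        sent += tx
        schedule.append(tx)
        played += frame                        # decoder consumes frame t
        if sent < played:
            return None                        # frame t misses its deadline
    return schedule

# Hypothetical VBR trace: a 1000-unit client buffer lets a constant
# 500-per-slot link carry it without loss.
print(smooth_schedule([300, 600, 200, 900, 100], buffer=1000,
                      bandwidth=[500] * 5))  # [500, 500, 500, 500, 100]
```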

    Parallel R&D Paths Revisited

    This paper revisits the logic of pursuing parallel R&D paths when there is uncertainty as to which approaches will succeed technically and/or economically. Previous findings by Richard Nelson and the present author are reviewed. A further analysis then seeks to determine how sensitive optimal strategies are to parameter variations, and the extent to which parallel and series strategies are integrated. It pays to support more approaches the deeper the stream of benefits and the lower the probability of success with a single approach. Higher profits are obtained with combinations of parallel and series strategies, but the differences are small when the number of series trial periods is extended from two to larger numbers. A "dartboard experiment" shows that when uncertainty pertains mainly to outcome values and the distribution of values is skewed, the optimal number of trials is inversely related to the cost per trial.
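    The trade-off has a simple expected-value core: with n independent approaches, each succeeding with probability p, the chance that at least one succeeds is 1 - (1-p)^n. A minimal sketch of the resulting choice of n (parameters hypothetical, not the paper's model):

```python
def expected_profit(n, benefit, cost, p):
    """Expected profit of funding n independent parallel R&D paths:
    payoff `benefit` if at least one succeeds; each path succeeds
    with probability p and costs `cost`.
    """
    return benefit * (1 - (1 - p) ** n) - cost * n

def optimal_parallelism(benefit, cost, p, max_n=50):
    return max(range(1, max_n + 1),
               key=lambda n: expected_profit(n, benefit, cost, p))

# A deep benefit stream and a low per-path success probability favor
# many parallel paths, as the paper's analysis describes.
print(optimal_parallelism(benefit=100.0, cost=1.0, p=0.1))  # many paths (22)
print(optimal_parallelism(benefit=10.0, cost=1.0, p=0.5))   # few paths (3)
```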

    Extending Complex Event Processing for Advanced Applications

    Recently, numerous emerging applications, ranging from on-line financial transactions, RFID-based supply chain management and traffic monitoring to real-time object monitoring, generate high-volume event streams. To meet the need to process event data streams in real time, Complex Event Processing (CEP) technology has been developed, focusing on detecting occurrences of particular composite patterns of events. By analyzing and constructing several real-world CEP applications, we found that CEP needs to be extended with advanced services beyond detecting pattern queries. We summarize these emerging needs in three orthogonal directions. First, for applications which require access to both streaming and stored data, we need to provide clear semantics and efficient schedulers in the face of concurrent access and failures. Second, when a CEP system is deployed in a sensitive environment such as health care, we wish to mitigate possible privacy leaks. Third, when input events do not carry the identification of the object being monitored, we need to infer the probabilistic identification of events before feeding them to a CEP engine. This dissertation therefore discusses the construction of a framework for extending CEP to support these critical services.

    First, existing CEP technology is limited in its capability of reacting to opportunities and risks detected by pattern queries. We propose to tackle this unsolved problem by embedding active rule support within the CEP engine. The main challenge is to handle interactions between queries and reactions to queries in high-volume stream execution. We hence introduce a novel stream-oriented transactional model along with a family of stream transaction scheduling algorithms that ensure the correctness of concurrent stream execution. We then demonstrate the proposed technology by applying it to a real-world healthcare system and evaluate the stream transaction scheduling algorithms extensively using real-world workloads.

    Second, we are the first to study the privacy implications of CEP systems. Specifically, we consider how to suppress events on a stream to reduce the disclosure of sensitive patterns, while ensuring that nonsensitive patterns continue to be reported by the CEP engine. We formally define the problem of utility-maximizing event suppression for privacy preservation. We then design a suite of real-time solutions that eliminate private pattern matches while maximizing the overall utility. Our first solution optimally solves the problem at the event-type level. The second solution, at the event-instance level, further optimizes the event-type-level solution by exploiting runtime event distributions using advanced pattern match cardinality estimation techniques. Our experimental evaluation over both real-world and synthetic event streams shows that our algorithms are effective in maximizing utility yet still efficient enough to offer near real-time system responsiveness.

    Third, we observe that in many real-world object monitoring applications where CEP technology is adopted, not all sensed events carry the identification of the object whose action they report on; we call these "non-ID-ed" events. Such non-ID-ed events prevent us from performing object-based analytics, such as tracking, alerting and pattern matching. We propose a probabilistic inference framework to tackle this problem by inferring the missing object identification associated with an event. Specifically, as a foundation we design a time-varying graphical model to capture correspondences between sensed events and objects. Upon this model, we elaborate how to adapt the state-of-the-art forward-backward inference algorithm to continuously infer probabilistic identifications for non-ID-ed events. More importantly, we propose a suite of strategies for optimizing the performance of inference. Our experimental results, using large-volume streams from a real-world health care application, demonstrate the accuracy, efficiency, and scalability of the proposed technology.
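    The dissertation adapts the forward-backward algorithm to its time-varying graphical model; the sketch below is the generic HMM form of that inference step, with hidden states standing for candidate object identities and observations for sensed events (all matrices are hypothetical placeholders, not the dissertation's model):

```python
import numpy as np

def forward_backward(obs, init, trans, emit):
    """Standard HMM forward-backward smoothing.

    obs:   observation indices over time (sensed events)
    init:  P(object identity) at time 0
    trans: trans[i][j] = P(identity j at t+1 | identity i at t)
    emit:  emit[i][o]  = P(observing event o | identity i)

    Returns the posterior P(identity | all events) per time step,
    i.e. a probabilistic identification for each non-ID-ed event.
    """
    T, N = len(obs), len(init)
    fwd = np.zeros((T, N))
    bwd = np.ones((T, N))
    fwd[0] = init * emit[:, obs[0]]
    fwd[0] /= fwd[0].sum()
    for t in range(1, T):                       # forward pass
        fwd[t] = (fwd[t - 1] @ trans) * emit[:, obs[t]]
        fwd[t] /= fwd[t].sum()
    for t in range(T - 2, -1, -1):              # backward pass
        bwd[t] = trans @ (emit[:, obs[t + 1]] * bwd[t + 1])
        bwd[t] /= bwd[t].sum()
    post = fwd * bwd
    return post / post.sum(axis=1, keepdims=True)

# Two candidate objects, two observable event types (all made up).
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
emit = np.array([[0.8, 0.2], [0.3, 0.7]])
print(forward_backward([0, 0, 1], np.array([0.5, 0.5]), trans, emit))
```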