416 research outputs found

    Batch Arrival Preemptive Loss Priority Queues with Preemption Distance

    This paper is concerned with preemptive loss priority queues in which batches of failed machines of each priority class arrive in a Poisson process and have a general service-time distribution. In this queueing system, failed machines are not considered for repair again once their service is preempted by the arrival of another batch of failed machines with higher priority; they disappear immediately. Such a system models situations in which deferred service is worthless for old demands of low priority. The model is based on strict preemption with a preemption distance parameter d, such that only failures of classes 1 to p - d can preempt the service of failures of class p. Closed-form expressions for the mean waiting time and mean sojourn time of each class are obtained from their distributions. Several numerical examples illustrate the approach.
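The preemption-distance rule described above can be sketched as a simple predicate (a minimal illustration under assumed class numbering, not the paper's analysis; the function name is ours):

```python
def can_preempt(arriving_class: int, in_service_class: int, d: int) -> bool:
    """Strict preemption with distance parameter d: an arriving batch of
    class q preempts a class-p service only if q <= p - d, i.e. only
    classes 1 .. p - d may preempt class p (class 1 = highest priority)."""
    return arriving_class <= in_service_class - d

# With d = 1 only strictly higher-priority classes preempt (ordinary
# preemptive priority); a larger d shields class p from preemption by
# nearby higher-priority classes.
```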

    A Survey on Communication Networks in Emergency Warning Systems


    Real-time Query Scheduling for Wireless Sensor Networks

    Recent years have seen the emergence of wireless sensor network (WSN) systems that require high-data-rate real-time communication. This paper proposes Real-Time Query Scheduling (RTQS), a novel approach to conflict-free transmission scheduling for real-time queries in WSNs. We show that there is an inherent trade-off between prioritization and throughput in conflict-free query scheduling. RTQS provides three new real-time scheduling algorithms. The non-preemptive query scheduling algorithm achieves high throughput while introducing priority inversions. The preemptive query scheduling algorithm eliminates priority inversions at the cost of reduced throughput. The slack-stealing query scheduling algorithm combines the benefits of preemptive and non-preemptive scheduling by improving throughput while meeting query deadlines. We provide schedulability analysis for each scheduling algorithm. The analysis and the advantages of our scheduling algorithms are validated through NS2 simulations.
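The prioritization/throughput trade-off mentioned above can be illustrated with a toy single-channel simulator (our own sketch, not the RTQS algorithms): under non-preemptive dispatch, a low-priority job that has already started blocks a newly released high-priority job (a priority inversion), whereas preemptive dispatch serves the high-priority job immediately.

```python
def run(jobs, preemptive):
    """Toy single-channel scheduler in unit time slots.
    jobs: list of dicts with 'release', 'prio' (lower = more urgent)
    and 'work' (slots needed). Returns each job's finish time."""
    remaining = [j["work"] for j in jobs]
    finish = [None] * len(jobs)
    t, current = 0, None
    while any(f is None for f in finish):
        ready = [i for i, j in enumerate(jobs)
                 if j["release"] <= t and remaining[i] > 0]
        # Non-preemptive: only re-dispatch when the channel is free.
        if preemptive or current is None or remaining[current] == 0:
            current = min(ready, key=lambda i: jobs[i]["prio"]) if ready else None
        if current is not None:
            remaining[current] -= 1
            if remaining[current] == 0:
                finish[current] = t + 1
        t += 1
    return finish

jobs = [{"release": 0, "prio": 2, "work": 5},   # low priority, starts first
        {"release": 1, "prio": 1, "work": 2}]   # high priority, arrives later
# Non-preemptive: the high-priority job is blocked until slot 5.
# Preemptive: the high-priority job runs as soon as it is released.
```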

    Priority queues

    Extensive research has been carried out on the subject of priority queues over the past ten years, culminating in the book by Jaiswal [8]. In this thesis, certain isolated problems which appear to have been omitted from the consideration of other authors are discussed. The first two chapters are concerned with the question of how priorities should be allocated to customers (or 'units') arriving at a queue so as to minimize the overall mean waiting time [it is perhaps worth mentioning at the outset that, following current usage, the terms 'queueing time' and 'waiting time' will be used synonymously throughout; both refer to the time a unit waits before commencing service]. In previous treatments of this 'allocation of priorities' problem it has always been assumed that, on arrival, the service-time requirement of a unit could be predicted exactly; the effect of having only imperfect information in the form of an estimated service time is considered here. Chapter 1 deals with the non-preemptive discipline; Chapter 2 with discretionary disciplines. Priority queues in which the arrival epochs of different types of units form independent renewal processes have only been solved under the assumption of random arrivals. However, if the following modified arrival scheme is considered (arrival epochs form an ordinary renewal process, and at any arrival epoch, independently of what happened at all previous epochs, the arrival is a priority unit with probability q1 and a non-priority unit with probability q2, where q1 + q2 = 1), then the priority analogues of the ordinary single-server queues E_b/G/1 and GI/M/1 can be solved (Chapters 3 and 4 respectively). In conclusion, Chapter 5 is concerned with approximate methods: section 1 is a review of previous work on deriving bounds for the mean waiting time in a GI/G/1 queue; section 2 extends this work to the GI/G/1 priority queue.
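For background on the non-preemptive setting of the early chapters, the per-class mean waiting times in an M/G/1 queue with fixed priorities are given by Cobham's classical formula; a direct transcription (standard queueing theory, not taken from the thesis):

```python
def cobham_waits(lams, means, second_moments):
    """Mean waiting time per class in a non-preemptive M/G/1 priority
    queue (class 1 = highest priority), by Cobham's formula:
        W_k = W0 / ((1 - sigma_{k-1}) * (1 - sigma_k)),
    where W0 = sum_i lambda_i * E[S_i^2] / 2  (mean residual work)
    and sigma_k = sum_{i<=k} lambda_i * E[S_i]  (cumulative load)."""
    w0 = sum(l * m2 for l, m2 in zip(lams, second_moments)) / 2.0
    waits, sigma = [], 0.0
    for l, s in zip(lams, means):
        prev = sigma
        sigma += l * s
        waits.append(w0 / ((1.0 - prev) * (1.0 - sigma)))
    return waits
```

With a single class and exponential service (lambda = 0.5, E[S] = 1, E[S^2] = 2) this recovers the M/M/1 queueing delay rho/(mu - lambda) = 1; with several classes it quantifies how much an assigned priority shortens a class's wait.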

    Composition and synchronization of real-time components upon one processor

    Many industrial systems have various hardware and software functions for controlling mechanics. If these functions act independently, as they do in legacy situations, their overall performance is not optimal. There is a trend towards optimizing the overall system performance and creating a synergy between the different functions in a system, which is achieved by replacing more and more dedicated, single-function hardware by software components running on programmable platforms. This increases the re-usability of the functions, but their synergy also requires that (parts of) the multiple software functions share the same embedded platform. In this work, we look at the composition of inter-dependent software functions on a shared platform from a timing perspective. We consider platforms comprising one preemptive processor resource and, optionally, multiple non-preemptive resources. Each function is implemented by a set of tasks; the group of tasks of a function that executes on the same processor, along with its scheduler, is called a component. The tasks of a component typically have hard timing constraints, and fulfilling these timing constraints requires analysis. Looking at a single function, co-operative scheduling of the tasks within a component has already proven to be a powerful tool for making the implementation of a function more predictable. For example, co-operative scheduling can accelerate the execution of a task (making it easier to satisfy timing constraints), it can reduce the cost of arbitrary preemptions (leading to more realistic execution-time estimates), and it can guarantee access to other resources without the need for arbitration by other protocols. Since timeliness is an important functional requirement, (re-)use of a component for composition and integration on a platform must deal with timing. 
To enable us to analyze and specify the timing requirements of a particular component in isolation from other components, we reserve and enforce the availability of all its specified resources during run-time. The real-time systems community has proposed hierarchical scheduling frameworks (HSFs) to implement this isolation between components. After admitting a component on a shared platform, a component in an HSF keeps meeting its timing constraints as long as it behaves as specified. If it violates its specification, it may be penalized, but other components are temporally isolated from the malignant effects. A component in an HSF is said to execute on a virtual platform with a dedicated processor at a speed proportional to its reserved processor supply. Three effects disturb this point of view. Firstly, processor time is supplied discontinuously. Secondly, the actual processor is faster. Thirdly, the HSF no longer guarantees the isolation of an individual component when two arbitrary components violate their specification during access to non-preemptive resources, even when access is arbitrated via well-defined real-time protocols. The scientific contributions of this work focus on these three issues. Our solutions to these issues cover the system design from component requirements to run-time allocation. Firstly, we present a novel scheduling method that enables us to integrate the component into an HSF. It guarantees that each integrated component executes its tasks exactly in the same order regardless of a continuous or a discontinuous supply of processor time. Using our method, the component executes on a virtual platform and it only experiences that the processor speed is different from the actual processor speed. As a result, we can focus on the traditional scheduling problem of meeting deadline constraints of tasks on a uni-processor platform. 
For such platforms, we show how scheduling tasks co-operatively within a component helps to meet the deadlines of this component. We compare the strength of these co-operative scheduling techniques to theoretically optimal schedulers. Secondly, we standardize the way of computing the resource requirements of a component, even in the presence of non-preemptive resources. We can therefore apply the same timing analysis to the components in an HSF as to the tasks inside them, regardless of their scheduling or of the protocol used for non-preemptive resources. This increases the re-usability of the timing analysis of components. We also make non-preemptive resources transparent during the development cycle of a component, i.e., the developer of a component can be unaware of the actual protocol being used in an HSF. Components can therefore be unaware that access to non-preemptive resources requires arbitration. Finally, we complement the existing real-time protocols for arbitrating access to non-preemptive resources with mechanisms to confine temporal faults to those components in the HSF that share the same non-preemptive resources. We compare the overheads of sharing non-preemptive resources between components with and without mechanisms for confinement of temporal faults, by means of experiments within an HSF-enabled real-time operating system.
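The kind of timing analysis referred to above can be illustrated by the standard worst-case response-time recurrence for fixed-priority preemptive scheduling, extended with a blocking term for non-preemptive resources (a textbook sketch under assumed implicit deadlines, not the thesis's own analysis):

```python
import math

def response_time(tasks, i):
    """Worst-case response-time analysis for task i under fixed-priority
    preemptive scheduling. tasks: list of (C, T, B) tuples sorted from
    highest to lowest priority, with worst-case execution time C,
    period T (taken as the deadline) and blocking time B from
    non-preemptive resources. Iterates
        R = C_i + B_i + sum_{j in hp(i)} ceil(R / T_j) * C_j
    to a fixed point; returns None if the deadline is missed."""
    C, T, B = tasks[i]
    r = C + B
    while True:
        nxt = C + B + sum(math.ceil(r / tasks[j][1]) * tasks[j][0]
                          for j in range(i))
        if nxt == r:
            return r            # fixed point: worst-case response time
        if nxt > T:
            return None         # unschedulable: exceeds the period
        r = nxt
```

For example, three tasks (C, T, B) = (1, 4, 0), (2, 6, 0), (3, 12, 0) are all schedulable, with the lowest-priority task finishing within 10 time units in the worst case.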

    VSRS: Variable Service Rate Scheduler for Low Rate Wireless Sensor Networks

    This paper proposes a variable service rate scheduler (VSRS) for heterogeneous wireless sensor and actuator networks (WSANs). Owing to recent advancements, various applications are being upgraded using sensor networks. Generally, traffic consists of delay-sensitive and delay-tolerant applications. Handling such traffic simultaneously is a critical challenge in IEEE 802.15.4 sensor networks, and the standard CSMA/CA does not account for traffic-based data delivery. Therefore, this paper presents a solution for priority-based traffic over non-priority (i.e., regular) traffic using the CSMA/CA IEEE 802.15.4 MAC sublayer. The VSRS scheduler uses a queueing model for scheduling incoming traffic at an actor node using a dual queue. The scheduler dynamically updates the priority of each incoming packet using a network-priority-weight metric, scans the queues, and picks the packet with the highest network priority; a packet's weight is updated after selection from the respective queue. This core operation at an actor node offers a good packet delivery ratio, high throughput, and lower delay for packets that have traveled long distances, compared with non-priority traffic. The work is validated using theoretical analysis and network simulation, which shows that the priority-based approach using a weight factor outperforms the First-Come-First-Serve (FCFS) mechanism.
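A minimal sketch of the dual-queue, weight-based dispatch idea (the aging rule and all names here are hypothetical stand-ins for the paper's network-priority-weight metric):

```python
class DualQueueScheduler:
    """Toy dual-queue dispatcher in the spirit of VSRS: each packet
    carries a weight combining its base priority with an aging bonus,
    so priority traffic goes first but regular traffic is not starved.
    The weight rule is an assumption, not the paper's exact metric."""

    def __init__(self, aging=1):
        self.aging = aging
        self.now = 0
        self.queues = {"priority": [], "regular": []}

    def enqueue(self, cls, base_weight):
        self.queues[cls].append({"cls": cls, "base": base_weight,
                                 "arrived": self.now})

    def dequeue(self):
        """Scan both queues and pop the packet with the highest
        effective weight (base priority + aging * waiting time)."""
        self.now += 1
        best, best_q, best_i = None, None, None
        for name, q in self.queues.items():
            for i, p in enumerate(q):
                w = p["base"] + self.aging * (self.now - p["arrived"])
                if best is None or w > best:
                    best, best_q, best_i = w, name, i
        return None if best_q is None else self.queues[best_q].pop(best_i)
```

With equal arrival times, a priority packet with a higher base weight is served before a regular one; a regular packet that waits long enough eventually overtakes freshly arrived priority traffic.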