
    FCFS Parallel Service Systems and Matching Models

    We consider three parallel service models in which customers of several types are served by several types of servers subject to a bipartite compatibility graph, and the service policy is first come first served. Two of the models have a fixed set of servers. The first is a queueing model in which arriving customers are assigned to the longest-idle compatible server if one is available, or else queue up in a single queue, and servers that become available pick the longest-waiting compatible customer, as studied by Adan and Weiss, 2014. The second is a redundancy service model in which arriving customers split into copies that queue up at all the compatible servers, are served in each queue on a FCFS basis, and leave the system when the first copy completes service, as studied by Gardner et al., 2016. The third model is a matching queueing model with a random stream of arriving servers. Arriving customers queue in a single queue; arriving servers match with the first compatible customer and leave immediately with that customer, or leave without one. The last model is relevant to organ transplants, housing assignments, adoptions, and many other situations. We study the relations between these models and show that they are closely related to the FCFS infinite bipartite matching model, in which two infinite sequences of customers and servers of several types are matched FCFS according to a bipartite compatibility graph, as studied by Adan et al., 2017. We also introduce a directed bipartite matching model in which we embed the queueing systems. This leads to a generalization of Burke's theorem to parallel service systems.
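
    As a rough illustration of the FCFS bipartite matching discipline underlying these models (a sketch, not code from the paper), the following matches a finite sequence of arriving servers to the longest-waiting compatible customers; the type names and the compatibility graph are invented for the example.

```python
import random

# Hypothetical compatibility graph: server type -> customer types it can serve.
COMPAT = {"s1": {"c1", "c2"}, "s2": {"c2", "c3"}}

def fcfs_match(customers, servers, compat):
    """Match servers to customers FCFS under a bipartite compatibility graph.

    customers, servers: lists of type labels in arrival order. Each arriving
    server takes the longest-waiting (earliest) compatible unmatched customer.
    Returns a list of (customer_index, server_index) pairs.
    """
    matches = []
    unmatched = list(range(len(customers)))  # customer indices, in arrival order
    for j, s in enumerate(servers):
        for i in unmatched:
            if customers[i] in compat[s]:
                matches.append((i, j))
                unmatched.remove(i)
                break
    return matches

random.seed(0)
customers = [random.choice(["c1", "c2", "c3"]) for _ in range(10)]
servers = [random.choice(["s1", "s2"]) for _ in range(10)]
print(fcfs_match(customers, servers, COMPAT))
```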

    Modeling Stochastic Lead Times in Multi-Echelon Systems

    In many multi-echelon inventory systems, the lead times are random variables. A common and reasonable assumption in most models is that replenishment orders do not cross, which implies that successive lead times are correlated. However, the process that generates such lead times is usually not well defined, which is especially a problem for simulation modeling. In this paper, we use results from queueing theory to define a set of simple lead time processes guaranteeing that (a) orders do not cross and (b) prespecified means and variances of all lead times in the multi-echelon system are attained.
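
    One classical way to realize a non-crossing lead time process (a sketch of the general idea, not necessarily the authors' construction) is to treat the supply channel as a FIFO single-server queue and let each order's lead time be its sojourn time, via a Lindley-type recursion; FIFO service guarantees deliveries arrive in the order placed, and the 'service' distribution can in principle be tuned toward target lead time moments.

```python
import random

def noncrossing_lead_times(interorder_times, service_sampler):
    """Generate lead times that cannot cross, via a Lindley-type recursion.

    Order i's lead time is its sojourn time in a FIFO single-server queue:
    L_i = max(L_{i-1} - A_i, 0) + S_i, where A_i is the time since the
    previous order and S_i is a random 'service' draw. Delivery times
    t_i + L_i are then nondecreasing, so orders never cross.
    """
    lead_times, prev = [], 0.0
    for a in interorder_times:
        prev = max(prev - a, 0.0) + service_sampler()
        lead_times.append(prev)
    return lead_times

random.seed(1)
# Orders placed every 1.0 time units; exponential 'service' times (assumed).
print(noncrossing_lead_times([1.0] * 10, lambda: random.expovariate(1.25)))
```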

    Run Time Approximation of Non-blocking Service Rates for Streaming Systems

    Stream processing is a compute paradigm that promises safe and efficient parallelism. Modern big-data problems are often well suited to stream processing's throughput-oriented nature. Realizing efficient stream processing requires monitoring and optimizing multiple communication links. Most techniques to optimize these links use queueing network models or network flow models, which require some idea of the actual execution rate of each independent compute kernel within the system: how fast each kernel can process data independent of the other kernels it communicates with. This is known as the "service rate" of the kernel within the queueing literature. Current approaches to divining service rates are static. Modern workloads, however, are often dynamic, and shared cloud systems present applications with highly dynamic execution environments (multiple users, hardware migration, etc.). It is therefore desirable to continuously re-tune an application during run time (online) in response to changing conditions. Our approach enables online service rate monitoring under most conditions, obviating the need to rely on steady-state predictions for what are probably non-steady-state phenomena. First, some of the difficulties associated with online service rate determination are examined. Second, the algorithm to approximate the online non-blocking service rate is described. Lastly, the algorithm is implemented within the open source RaftLib framework and validated using a simple microbenchmark as well as two full streaming applications.
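
    A minimal sketch of the non-blocking measurement idea (the RaftLib implementation differs in detail; the class and names here are invented): accumulate only the time a kernel spends actually computing, excluding time blocked on empty input or full output queues, and divide items processed by that busy time.

```python
import time

class ServiceRateMonitor:
    """Sketch of online non-blocking service rate estimation (hypothetical API).

    Only the time the kernel spends actually processing is accumulated;
    time blocked on an empty input queue or a full output queue is excluded
    by construction, since process() is called only when work can proceed.
    """
    def __init__(self):
        self.busy_time = 0.0
        self.items = 0

    def process(self, kernel, item):
        start = time.perf_counter()
        result = kernel(item)           # time the compute only, not the waiting
        self.busy_time += time.perf_counter() - start
        self.items += 1
        return result

    def service_rate(self):
        """Items per second of busy time: the non-blocking service rate."""
        return self.items / self.busy_time if self.busy_time else float("nan")

monitor = ServiceRateMonitor()
for x in range(100_000):
    monitor.process(lambda v: v * v, x)
print(f"approximate service rate: {monitor.service_rate():,.0f} items/s")
```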

    The MDS Queue: Analysing the Latency Performance of Erasure Codes

    In order to scale economically, data centers are increasingly evolving their data storage methods from simple data replication to more powerful erasure codes, which provide the same level of reliability as replication but at a significantly lower storage cost. In particular, it is well known that Maximum-Distance-Separable (MDS) codes, such as Reed-Solomon codes, provide the maximum storage efficiency. While the use of codes for providing improved reliability in archival storage systems, where the data is less frequently accessed (so-called "cold data"), is well understood, the role of codes in the storage of more frequently accessed and active "hot data", where latency is the key metric, is less clear. In this paper, we study data storage systems based on MDS codes through the lens of queueing theory, and term this the "MDS queue." We analytically characterize the (average) latency performance of MDS queues by presenting insightful scheduling policies whose performance forms upper and lower bounds that are observed to be quite tight. Extensive simulations are also provided and used to validate our theoretical analysis. We also employ the framework of the MDS queue to analyse different methods of performing so-called degraded reads (reading of partial data) in distributed data storage.
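
    To make the model concrete, here is a toy discrete-event simulation of an MDS-style queue (the parameters and the specific FCFS-style dispatch rule are illustrative assumptions, not the paper's exact bounding policies): each job must be completed by k distinct servers out of n, and an idle server takes the earliest waiting job it can still help.

```python
import heapq
import random

def sim_mds_queue(n=4, k=2, lam=1.2, mu=1.0, num_jobs=50_000, seed=7):
    """Toy MDS queue: each job needs service from k distinct servers out of n;
    an idle server takes the earliest waiting job it has not already served.
    All parameters (arrival rate lam, service rate mu) are illustrative."""
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), "arrival", -1, -1)]  # (time, kind, job, server)
    jobs = {}      # job id -> [arrival_time, completions, assigned_servers]
    fcfs = []      # jobs that may still need servers, in arrival order
    idle = set(range(n))
    latencies, next_id = [], 0

    def dispatch(now):
        for s in list(idle):
            for j in fcfs:
                assigned = jobs[j][2]
                if len(assigned) < k and s not in assigned:
                    assigned.add(s)
                    idle.discard(s)
                    heapq.heappush(events, (now + rng.expovariate(mu), "done", j, s))
                    break

    while len(latencies) < num_jobs:
        now, kind, j, s = heapq.heappop(events)
        if kind == "arrival":
            jobs[next_id] = [now, 0, set()]
            fcfs.append(next_id)
            next_id += 1
            heapq.heappush(events, (now + rng.expovariate(lam), "arrival", -1, -1))
        else:
            idle.add(s)
            jobs[j][1] += 1
            if jobs[j][1] == k:            # k-th task finished: job departs
                latencies.append(now - jobs[j][0])
                fcfs.remove(j)
        dispatch(now)
    return sum(latencies) / len(latencies)

print(f"mean job latency: {sim_mds_queue():.3f}")
```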

    Private operators and time-of-day tolling on a congested road network

    Private-sector involvement in the construction and operation of roads is growing around the world, and private toll roads are seen as a useful tool in the battle against congestion. Yet serious concerns remain about the exercise of monopoly power if private operators can set tolls freely. A number of theoretical studies have investigated private toll-road pricing strategies and compared them with first-best and second-best public tolls, but most of these analyses have employed simple road networks and/or static models that capture neither the temporal dimension of congestion nor the impacts of tolling schemes that vary by time of day. This paper takes a fresh look at private toll-road pricing using METROPOLIS, a dynamic traffic simulator that treats choices of transport mode, departure time, and route endogenously at the level of individual travellers. Simulations are performed for the morning peak-period commute on a stylized urban road network with jobs concentrated towards the centre of the city. Tolling scenarios are defined in terms of what is tolled (traffic lanes, whole links, or toll rings) and how tolls vary over time. Three administration regimes are compared. The first two are the standard polar cases: social surplus maximization by a public-sector operator, and unconstrained profit maximization by a private-sector operator. The third regime varies tolls in steps so as to eliminate queuing on the tolled links; it is a form of third-best tolling that could be implemented either by a public operator or by the private sector under quality-of-service regulation. Amongst the results, it is found that the no-queue tolling regime performs favourably compared to public step tolling, and invariably better than private tolling. Another provisional finding is that a private operator has less incentive than a public operator to implement time-of-day congestion pricing.
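
    The no-queue regime can be illustrated with the classic bottleneck-model device (a hedged sketch with invented numbers, not output from METROPOLIS): in each departure interval, charge roughly the money equivalent of the queueing delay travellers would otherwise incur, substituting toll for time spent in queue.

```python
# Invented example: equilibrium queueing delays (hours) on a tolled link in
# six 30-minute departure intervals across the morning peak, and an assumed
# value of travel time. Neither number comes from the paper.
queue_delay_h = [0.00, 0.10, 0.25, 0.30, 0.15, 0.00]
value_of_time = 10.0   # currency units per hour (assumed)

# A no-queue step toll charges, in each interval, roughly the money equivalent
# of the queueing delay users would otherwise face; in the classic bottleneck
# model this replaces time wasted queuing with toll revenue.
step_toll = [round(value_of_time * d, 2) for d in queue_delay_h]
print(step_toll)   # [0.0, 1.0, 2.5, 3.0, 1.5, 0.0]
```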

    Queueing models for token and slotted ring networks

    Currently, the end-to-end delay characteristics of very high speed local area networks are not well understood. The transmission speed of computer networks is increasing, and local area networks especially are finding increasing use in real-time systems. Ring network operation is generally well understood for both token rings and slotted rings. There is, however, a severe lack of queueing models for higher-layer operation. Several factors contribute to the processing delay of a packet, as opposed to its transmission delay: packet priority, packet length, the user load, the processor load, the use of priority preemption, the use of preemption at packet reception, the number of processors, the number of protocol processing layers, the speed of each processor, and queue length limitations. Currently existing medium access queueing models are extended by adding modeling techniques that handle exhaustive limited service both with and without priority traffic, and modeling capabilities are extended into the upper layers of the OSI model. Some of the models are parameterized solution methods, since it is shown that certain models do not exist as parameterized solutions but only as solution methods.
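
    As a concrete stand-in for token-ring medium access with limited service (a toy sketch; all parameters are invented and no higher-layer protocol processing is modeled), the following simulates a cyclic polling system in which the token visits stations in turn and transmits at most a fixed number of packets per visit.

```python
import random

def sim_limited_polling(num_stations=4, lam=0.1, mu=1.0, switchover=0.2,
                        limit=1, horizon=100_000.0, seed=3):
    """Toy cyclic polling system with limited-k service, a rough stand-in for
    token-ring medium access: the token visits stations in turn and transmits
    at most `limit` packets per visit. All parameters are invented."""
    rng = random.Random(seed)
    queues = [[] for _ in range(num_stations)]   # arrival timestamps per station
    next_arrival = [rng.expovariate(lam) for _ in range(num_stations)]
    t, delays, station = 0.0, [], 0

    def admit_arrivals(upto):
        for q in range(num_stations):
            while next_arrival[q] <= upto:
                queues[q].append(next_arrival[q])
                next_arrival[q] += rng.expovariate(lam)

    while t < horizon:
        admit_arrivals(t)
        served = 0
        while queues[station] and served < limit:
            t += rng.expovariate(mu)             # packet transmission time
            delays.append(t - queues[station].pop(0))
            served += 1
            admit_arrivals(t)
        t += switchover                          # token-passing time
        station = (station + 1) % num_stations
    return sum(delays) / len(delays)

print(f"mean packet delay: {sim_limited_polling():.3f}")
```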