
    Controlling the workload of M/G/1 queues via the q-policy

    Get PDF
    The final publication is available at Elsevier via https://doi.org/10.1016/j.ejor.2014.12.036 © 2015. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/.
    We consider a single-server queueing system with Poisson arrivals and generally distributed service times. To systematically control the workload of the queue, we define for each busy period an associated timer process, {R(t), t ≥ 0}, where R(t) represents the time remaining before the system is closed to potential arrivals. The process {R(t), t ≥ 0} is similar to the well-known workload process, in that it decreases at unit rate and consists of up-jumps at the arrival instants of admitted customers. However, if X represents the service requirement of an admitted customer, then the magnitude of the up-jump for the timer process occurring at the arrival instant of this customer is (1 − q)X for a fixed q ∈ [0, 1]. Consequently, there will be an instant in time within the busy period when the timer process hits level zero, at which point the system immediately closes and will remain closed until the end of the current busy period. We refer to this particular blocking policy as the q-policy. In this paper, we employ a level crossing analysis to derive the Laplace–Stieltjes transform (LST) of the steady-state waiting time distribution of serviceable customers. We conclude the paper with a numerical example which shows that controlling arrivals in this fashion can be beneficial.
    NSERC (Natural Sciences and Engineering Research Council of Canada)
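    The timer dynamics described above lend themselves to a short simulation. The Python sketch below is a minimal illustration of the q-policy, assuming exponential service times and assuming the busy-period initiator starts the timer with an up-jump of (1 − q) times its own service requirement; neither assumption is taken from the paper, and the function name simulate_q_policy is ours.

```python
import random

def simulate_q_policy(lam=0.8, mu=1.0, q=0.5, horizon=100_000.0, seed=1):
    """Sketch: waiting times of admitted customers under a q-policy.

    Illustrative assumptions (not from the paper): exponential service times,
    and the timer of a busy period is started by the initiating customer with
    an up-jump of (1 - q) times its own service requirement.
    """
    rng = random.Random(seed)
    t = 0.0           # current time
    workload = 0.0    # unfinished work; the busy period ends when it hits 0
    timer = 0.0       # timer process R(t); arrivals are blocked once it hits 0
    waits = []        # waiting times of admitted ("serviceable") customers

    while t < horizon:
        dt = rng.expovariate(lam)           # time until the next potential arrival
        workload = max(0.0, workload - dt)  # both processes decrease at unit rate
        timer = max(0.0, timer - dt)
        t += dt
        x = rng.expovariate(mu)             # service requirement of this arrival
        if workload == 0.0:
            # arrival to an empty system starts a new busy period and a new timer
            waits.append(0.0)
            workload, timer = x, (1.0 - q) * x
        elif timer > 0.0:
            # system still open: admit; waiting time equals the workload found
            waits.append(workload)
            workload += x
            timer += (1.0 - q) * x
        # else: the timer already hit zero, the system is closed, arrival blocked

    return sum(waits) / len(waits)

if __name__ == "__main__":
    for q in (0.0, 0.5, 0.9):
        print(f"q = {q:.1f}: mean wait of admitted customers ~ {simulate_q_policy(q=q):.3f}")
```

    With these assumptions, q = 0 makes the timer coincide with the workload (no blocking), while larger q closes the system earlier in each busy period, which is the trade-off the numerical example in the paper explores.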

    The Value-of-Information in Matching with Queues

    Full text link
    We consider the problem of optimal matching with queues in dynamic systems and investigate the value-of-information. In such systems, the operators match tasks and resources stored in queues, with the objective of maximizing the system utility of the matching reward profile, minus the average matching cost. This problem appears in many practical systems, and the main challenges are the no-underflow constraints and the lack of matching-reward information and system-dynamics statistics. We develop two online matching algorithms, Learning-aided Reward optimAl Matching (LRAM) and Dual-LRAM (DRAM), to effectively resolve both challenges. Both algorithms are equipped with a learning module for estimating the matching-reward information, while DRAM incorporates an additional module for learning the system dynamics. We show that both algorithms achieve an $O(\epsilon+\delta_r)$ close-to-optimal utility performance for any $\epsilon>0$, while DRAM achieves a faster convergence speed and a better delay compared to LRAM, i.e., $O(\delta_z/\epsilon + \log(1/\epsilon)^2)$ delay and $O(\delta_z/\epsilon)$ convergence under DRAM compared to $O(1/\epsilon)$ delay and convergence under LRAM ($\delta_r$ and $\delta_z$ are the maximum estimation errors for the reward and the system dynamics, respectively). Our results reveal that information about different system components can play very different roles in algorithm performance, and they provide a systematic way of designing joint learning-control algorithms for dynamic systems.
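    As a rough illustration of the kind of learning-aided, queue-aware matching rule the abstract describes, the Python sketch below keeps running estimates of pair rewards and greedily matches the pair with the largest estimated net benefit plus backlog weight, subject to a no-underflow check. The class, the weights, the optimistic prior, and the update rule are illustrative assumptions, not the paper's LRAM/DRAM algorithms.

```python
import random
from collections import defaultdict

class OnlineMatcher:
    """Toy sketch of a learning-aided matching rule: match task queues to
    resource queues using *estimated* rewards, never matching from an empty
    queue (no-underflow).  Names and weighting are illustrative only."""

    def __init__(self, task_types, resource_types, V=50.0):
        self.V = V                                # utility vs. backlog trade-off
        self.tasks = {i: 0 for i in task_types}   # task queue lengths
        self.resources = {j: 0 for j in resource_types}
        self.reward_sum = defaultdict(float)      # running reward statistics
        self.reward_cnt = defaultdict(int)

    def estimate(self, i, j):
        n = self.reward_cnt[(i, j)]
        if n == 0:
            return 1.0                            # optimistic prior for unexplored pairs
        return self.reward_sum[(i, j)] / n

    def arrivals(self, task_arrivals, resource_arrivals):
        for i, a in task_arrivals.items():
            self.tasks[i] += a
        for j, a in resource_arrivals.items():
            self.resources[j] += a

    def match(self, cost):
        """Greedy one-match-per-slot decision: pick the pair with the largest
        V * (estimated reward - cost) + backlog weight, if that weight is
        positive and both queues are non-empty."""
        best, best_w = None, 0.0
        for i, qi in self.tasks.items():
            for j, qj in self.resources.items():
                if qi == 0 or qj == 0:
                    continue                       # no-underflow constraint
                w = self.V * (self.estimate(i, j) - cost(i, j)) + qi + qj
                if w > best_w:
                    best, best_w = (i, j), w
        if best is not None:
            i, j = best
            self.tasks[i] -= 1
            self.resources[j] -= 1
        return best

    def observe_reward(self, i, j, r):
        """Feed the realized reward back into the estimate (learning module)."""
        self.reward_sum[(i, j)] += r
        self.reward_cnt[(i, j)] += 1

m = OnlineMatcher(task_types=["a", "b"], resource_types=["x", "y"])
m.arrivals({"a": 3, "b": 1}, {"x": 2, "y": 2})
pair = m.match(cost=lambda i, j: 0.1)
if pair:
    m.observe_reward(*pair, r=random.random())    # realized reward feedback
```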

    Heavy traffic analysis of open processing networks with complete resource pooling: asymptotic optimality of discrete review policies

    Full text link
    We consider a class of open stochastic processing networks, with feedback routing and overlapping server capabilities, in heavy traffic. The networks we consider satisfy the so-called complete resource pooling condition and therefore have one-dimensional approximating Brownian control problems. We propose a simple discrete review policy for controlling such networks. Assuming $2+\epsilon$ moments on the interarrival times and processing times, we provide a conceptually simple proof of asymptotic optimality of the proposed policy.
    Comment: Published at http://dx.doi.org/10.1214/105051604000000495 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
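    A discrete review policy re-plans at fixed review epochs from the observed queue lengths. The Python sketch below shows one such planning step as a small linear program (via scipy.optimize.linprog); it is a simplified stand-in under our own assumptions, and in particular it omits the safety-stock/threshold construction of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def discrete_review_plan(z, l, lam, mu):
    """One planning step of a simplified discrete review policy (sketch only).

    At a review epoch the controller observes the queue-length vector `z` and
    allocates, for the next period of length `l`, how much time each server
    spends on each job class so as to maximize planned departures, without
    planning to serve more of a class than is expected to be available.

    z   : queue length per class, shape (K,)
    lam : external arrival rate per class, shape (K,)
    mu  : mu[s, k] = rate at which server s processes class k (0 if it cannot),
          shape (S, K); overlapping capabilities = several nonzero rows per k
    """
    mu = np.asarray(mu, dtype=float)
    S, K = mu.shape
    n = S * K                            # decision variable x[s, k], flattened

    c = -mu.ravel()                      # maximize planned departures

    A_ub, b_ub = [], []
    for s in range(S):                   # each server has l time units to spend
        row = np.zeros(n)
        row[s * K:(s + 1) * K] = 1.0
        A_ub.append(row)
        b_ub.append(l)
    for k in range(K):                   # crude no-underflow guard per class
        row = np.zeros(n)
        row[k::K] = mu[:, k]
        A_ub.append(row)
        b_ub.append(z[k] + lam[k] * l)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0.0, l)] * n, method="highs")
    return res.x.reshape(S, K)           # time allocation for the coming period

# Two classes, two cross-trained servers (complete resource pooling in spirit):
print(discrete_review_plan(z=[5.0, 2.0], l=10.0, lam=[0.4, 0.3],
                           mu=[[1.0, 0.8], [0.9, 1.1]]))
```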

    Two-dimensional fluid queues with temporary assistance

    Full text link
    We consider a two-dimensional stochastic fluid model with N ON-OFF inputs and temporary assistance, which is an extension of the same model with N = 1 in Mahabhashyam et al. (2008). The rates of change of both buffers are piecewise constant and dependent on the underlying Markovian phase of the model, and the rates of change for Buffer 2 are also dependent on the specific level of Buffer 1. This is because both buffers share a fixed output capacity, the precise proportion of which depends on Buffer 1. The generalization of the number of ON-OFF inputs necessitates modifications in the original rules of output-capacity sharing from Mahabhashyam et al. (2008) and considerably complicates both the theoretical analysis and the numerical computation of various performance measures.
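    To make the model concrete, the crude time-stepped Python sketch below simulates two buffers, one fed by N ON-OFF sources and both sharing a fixed output capacity whose split depends on the level of Buffer 1. The specific sharing rule, the rates, and the constant inflow to Buffer 2 are illustrative assumptions, not the rules of Mahabhashyam et al. (2008) or of this paper.

```python
import random

def simulate_fluid(N=3, c=2.0, r_on=1.0, alpha=0.5, beta=0.8,
                   threshold=1.0, horizon=10_000.0, dt=0.01, seed=1):
    """Time-stepped sketch of a two-buffer fluid model with N ON-OFF inputs
    sharing a fixed output capacity `c`.

    Assumed (illustrative) sharing rule: while Buffer 1 is above `threshold`
    it receives the whole capacity, otherwise the capacity is split equally.
    The ON-OFF sources feed Buffer 1; Buffer 2 is fed at a constant rate
    (also an assumption made only for this example).

    alpha : ON -> OFF transition rate,  beta : OFF -> ON transition rate
    """
    rng = random.Random(seed)
    on = [True] * N            # phase of each ON-OFF source
    b1 = b2 = 0.0              # buffer contents
    r2_in = 0.5                # constant inflow to Buffer 2 (assumption)
    t = 0.0
    area1 = area2 = 0.0        # integrals for time-average buffer contents

    while t < horizon:
        for i in range(N):     # approximate phase transitions over one step
            if rng.random() < (alpha if on[i] else beta) * dt:
                on[i] = not on[i]
        # capacity sharing depends on the level of Buffer 1
        if b1 > threshold:
            c1, c2 = c, 0.0
        else:
            c1, c2 = c / 2.0, c / 2.0
        inflow1 = r_on * sum(on)
        b1 = max(0.0, b1 + (inflow1 - c1) * dt)
        b2 = max(0.0, b2 + (r2_in - c2) * dt)
        area1 += b1 * dt
        area2 += b2 * dt
        t += dt

    return area1 / horizon, area2 / horizon   # time-average buffer contents

print(simulate_fluid())
```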

    Pilot interaction with automated airborne decision making systems

    Get PDF
    An investigation was made of the interaction between a human pilot and automated on-board decision-making systems. Research was initiated on the topic of pilot problem solving in automated and semi-automated flight management systems, and attempts were made to develop a model of human decision making in a multi-task situation. A study was made of the allocation of responsibility between human and computer, and various pilot performance parameters were discussed for varying degrees of automation. Optimal allocation of responsibility between human and computer was considered, and some theoretical results found in the literature were presented. The pilot as a problem solver was discussed. Finally, the design of displays, controls, procedures, and computer aids for problem-solving tasks in automated and semi-automated systems was considered.

    Bulk Scheduling with the DIANA Scheduler

    Full text link
    Results from the research and development of a Data Intensive and Network Aware (DIANA) scheduling engine, to be used primarily for data-intensive sciences such as physics analysis, are described. In Grid analyses, tasks can involve thousands of computing, data handling, and network resources. The central problem in the scheduling of these resources is the coordinated management of computation and data at multiple locations, and not just data replication or movement. However, this can prove to be a rather costly operation, and efficient scheduling can be a challenge if compute and data resources are mapped without considering network costs. We have implemented an adaptive algorithm within the so-called DIANA Scheduler which takes into account data location and size, network performance and computation capability in order to enable efficient global scheduling. DIANA is a performance-aware and economy-guided Meta Scheduler. It iteratively allocates each job to the site that is most likely to produce the best performance, as well as optimizing the global queue for any remaining jobs. It is therefore equally suitable whether a single job is being submitted or bulk scheduling is being performed. Results indicate that considerable performance improvements can be gained by adopting the DIANA scheduling approach.
    Comment: 12 pages, 11 figures. To be published in the IEEE Transactions in Nuclear Science, IEEE Press. 200
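    To illustrate what a data- and network-aware site-selection step might look like, the Python sketch below scores candidate sites by a weighted combination of data-transfer time, a compute-time proxy, and queue backlog, and assigns the job to the cheapest site. The cost formula, the weights, and the site data are illustrative assumptions, not DIANA's published cost functions.

```python
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    cpu_power: float        # relative compute capability (higher is faster)
    queue_length: int       # jobs already waiting at the site
    bandwidth_to: dict      # site name -> available bandwidth (MB/s) to here

def site_cost(site, job_size_mb, data_location, data_size_mb,
              w_net=1.0, w_cpu=1.0, w_queue=0.5):
    """Illustrative cost model: combine (i) the time to move the input data
    to the site, (ii) an execution-time proxy from compute capability, and
    (iii) the site's queue backlog.  Weights and formula are assumptions
    made for this example only."""
    if data_location == site.name:
        transfer = 0.0       # data is already local
    else:
        bw = max(site.bandwidth_to.get(data_location, 1e-6), 1e-6)
        transfer = data_size_mb / bw
    compute = job_size_mb / site.cpu_power
    return w_net * transfer + w_cpu * compute + w_queue * site.queue_length

def schedule(job_size_mb, data_location, data_size_mb, sites):
    """Allocate the job to the site with the lowest combined cost."""
    return min(sites, key=lambda s: site_cost(s, job_size_mb,
                                              data_location, data_size_mb))

# Hypothetical sites used only for the example:
sites = [
    Site("SiteA", cpu_power=2.0, queue_length=40, bandwidth_to={"SiteB": 50.0}),
    Site("SiteB", cpu_power=1.2, queue_length=5,  bandwidth_to={"SiteA": 50.0}),
]
best = schedule(job_size_mb=500, data_location="SiteB", data_size_mb=2000, sites=sites)
print(best.name)
```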