Diffusion limits for shortest remaining processing time queues
We present a heavy traffic analysis for a single server queue with renewal
arrivals and generally distributed i.i.d. service times, in which the server
employs the Shortest Remaining Processing Time (SRPT) policy. Under typical
heavy traffic assumptions, we prove a diffusion limit theorem for a
measure-valued state descriptor, from which we conclude a similar theorem for
the queue length process. These results allow us to make some observations on
the queue length optimality of SRPT. In particular, they provide the sharpest
illustration of the well-known tension between queue length optimality and
quality of service for this policy.
Comment: 19 pages; revised, fixed typos. To appear in Stochastic Systems.
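As context for the policy analyzed above, the SRPT discipline itself can be sketched as a short event-driven simulation. This is a sketch of the scheduling rule only, not of the paper's measure-valued heavy-traffic model; the arrival times and job sizes in the usage note are illustrative assumptions:

```python
import heapq

def srpt_completion_times(arrivals, sizes):
    """Preemptive SRPT on a single server: always serve the job with
    the smallest remaining processing time. Returns completion times
    indexed by job."""
    jobs = sorted(zip(arrivals, sizes, range(len(arrivals))))
    done = [0.0] * len(arrivals)
    heap = []          # entries are (remaining_time, job_id)
    t = 0.0
    i = 0
    while i < len(jobs) or heap:
        if not heap:                      # server idle: jump to next arrival
            t = max(t, jobs[i][0])
        # admit every job that has arrived by time t
        while i < len(jobs) and jobs[i][0] <= t:
            heapq.heappush(heap, (jobs[i][1], jobs[i][2]))
            i += 1
        rem, jid = heapq.heappop(heap)
        # run until this job finishes or the next arrival can preempt it
        next_arr = jobs[i][0] if i < len(jobs) else float("inf")
        if t + rem <= next_arr:
            t += rem
            done[jid] = t
        else:
            rem -= next_arr - t
            t = next_arr
            heapq.heappush(heap, (rem, jid))
    return done
```

For arrivals at times 0 and 1 with sizes 10 and 1, the size-1 job preempts the large one and completes at time 2, while the large job completes at time 11, illustrating the small-job favoritism behind the queue-length/quality-of-service tension discussed above.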
Resource Utilization Prediction: A Proposal for Information Technology Research
Research into predicting long-term resource needs has long faced the difficult problem of extending prediction accuracy beyond the immediate future. Business forecasting has overcome this limitation by successfully incorporating human interaction as the basis of prediction patterns at hourly, daily, weekly, monthly, and yearly time frames. Computer resource utilization is also shaped by human interaction, which motivates research into predicting resource usage from human access patterns. Emulated human web-server access data was captured in a feasibility study that used time series analysis to predict future resource usage. For predictions beyond several minutes, results indicate that the majority of projected resource usage fell within an 80% confidence level, supporting the foundation for future resource-prediction work in this area.
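The 80% confidence-level check described above can be sketched with a simple one-step-ahead forecaster. The moving-average model and the normal-approximation interval are illustrative assumptions of this sketch; the study's actual time-series model is not specified here:

```python
import statistics

def forecast_with_interval(history, window=12, z=1.28):
    """One-step-ahead moving-average forecast with a symmetric
    prediction interval; z = 1.28 roughly corresponds to a two-sided
    80% normal interval."""
    recent = history[-window:]
    mean = statistics.fmean(recent)
    sd = statistics.stdev(recent) if len(recent) > 1 else 0.0
    return mean, (mean - z * sd, mean + z * sd)

def coverage(series, window=12, z=1.28):
    """Fraction of observations that fall inside the interval
    forecast from their own history (empirical coverage)."""
    hits, n = 0, 0
    for t in range(window, len(series)):
        _, (lo, hi) = forecast_with_interval(series[:t], window, z)
        if lo <= series[t] <= hi:
            hits += 1
        n += 1
    return hits / n if n else 0.0
```

An empirical coverage near 0.8 on held-out utilization data would correspond to the kind of 80% confidence-level result reported above.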
Mixed-Criticality on the AFDX Network: Challenges and Potential Solutions
In this paper, we first assess the most relevant existing solutions for enabling mixed-criticality on the AFDX and select the most suitable one. Afterwards, the specification of an extended AFDX based on the Burst-Limiting Shaper (BLS) is detailed to fulfill the main avionics requirements and challenges. Finally, a preliminary evaluation of this proposal is conducted through simulations. Results show its ability to guarantee the constraints of the highest-criticality traffic while limiting its impact on current AFDX traffic.
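As a rough illustration of the credit-based idea behind the BLS: the shaped class accumulates credit while it transmits and sheds credit while idle, and is demoted to a lower priority once its credit reaches an upper threshold. The rates, thresholds, and demotion/promotion rule below are simplified assumptions of this sketch, not the paper's exact shaper specification:

```python
def bls_priority_trace(events, i_send=1.0, i_idle=0.5, L_M=10.0, L_R=0.0):
    """Track a BLS-style credit counter and the resulting priority state.
    events: list of (duration, transmitting) pairs for the shaped class.
    While the class transmits, credit rises at i_send (capped at L_M);
    otherwise it decays at i_idle (floored at 0). Reaching L_M demotes
    the class to low priority; falling back to L_R promotes it again."""
    credit, high_prio = 0.0, True
    trace = []
    for dur, transmitting in events:
        if transmitting:
            credit = min(L_M, credit + i_send * dur)
        else:
            credit = max(0.0, credit - i_idle * dur)
        if credit >= L_M:
            high_prio = False
        elif credit <= L_R:
            high_prio = True
        trace.append((credit, high_prio))
    return trace
```

The demotion after a long burst is what limits the impact of the high-criticality class on the ordinary AFDX traffic sharing the link.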
Minimizing the Worst Slowdown: Off-Line and On-Line
Minimizing the slowdown (expected sojourn time divided by job size) is a key fairness concern in scheduling and queuing problems where job sizes are very heterogeneous. We look for protocols (service disciplines) capping the worst slowdown (called here the liability) a job may face, no matter how large (or small) the other jobs are. In the scheduling problem (all jobs released at the same time), allowing the server to randomize the order of service cuts the feasible liability profiles almost in half compared with deterministic protocols. The same statement holds if cash transfers are feasible and users have linear waiting costs.
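The halving effect can be checked on a toy instance. A minimal sketch, assuming all jobs released at time 0, comparing the worst slowdown under the best deterministic service order with the worst expected slowdown under a uniformly random order (the job sizes in the usage note are illustrative):

```python
from itertools import permutations

def worst_slowdown_deterministic(sizes):
    """Best deterministic order: minimize, over all service orders,
    the worst slowdown (completion time / job size) among the jobs."""
    best = float("inf")
    for order in permutations(range(len(sizes))):
        t, worst = 0.0, 0.0
        for j in order:
            t += sizes[j]
            worst = max(worst, t / sizes[j])
        best = min(best, worst)
    return best

def worst_expected_slowdown_random(sizes):
    """Uniformly random order: each other job precedes a given job
    with probability 1/2, so E[completion of j] = size_j plus half
    the total size of the others. Return the worst such slowdown."""
    total = sum(sizes)
    return max((s + (total - s) / 2) / s for s in sizes)
```

For two unit-size jobs, the best deterministic order gives a worst slowdown of 2 (someone must go second), while the random order gives a worst expected slowdown of 1.5, consistent with the near-halving described above.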
Heavy-Tailed Limits for Medium Size Jobs and Comparison Scheduling
We study the conditional sojourn time distributions of processor sharing
(PS), foreground background processor sharing (FBPS) and shortest remaining
processing time first (SRPT) scheduling disciplines on an event where the job
size of a customer arriving in stationarity is smaller than exactly k>=0 out of
the preceding m>=k arrivals. Then, conditioning on the preceding event, the
sojourn time distribution of this newly arriving customer behaves
asymptotically the same as if the customer were served in isolation with a
server of rate (1-\rho)/(k+1) for PS/FBPS, and (1-\rho) for SRPT, respectively,
where \rho is the traffic intensity. Hence, the introduced notion of
conditional limits allows us to distinguish the asymptotic performance of the
studied schedulers by showing that SRPT exhibits considerably better asymptotic
behavior for relatively smaller jobs than PS/FBPS.
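The two asymptotic rates stated above can be compared directly; a minimal sketch of the formulas (the values of \rho and k used in testing are illustrative):

```python
def effective_rates(rho, k):
    """Asymptotic isolated-server rates from the conditional limit:
    (1 - rho)/(k + 1) for PS/FBPS, and (1 - rho) for SRPT, so SRPT's
    rate exceeds the PS/FBPS rate by exactly a factor of k + 1."""
    ps_fbps = (1 - rho) / (k + 1)
    srpt = 1 - rho
    return ps_fbps, srpt
```

The factor of k + 1 is the quantitative form of the claim that SRPT behaves considerably better for relatively smaller jobs.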
Inspired by the preceding results, we propose an approximation to the SRPT
discipline based on a novel adaptive job grouping mechanism that uses relative
size comparison of a newly arriving job to the preceding m arrivals.
Specifically, if the newly arriving job is smaller than k and larger than m-k
of the previous m jobs, it is routed into class k. Then, the classes of smaller
jobs are served with higher priorities using the static priority scheduling.
The good performance of this mechanism, even for a small number of classes m+1,
is demonstrated using the asymptotic queueing analysis under the heavy-tailed
job requirements. We also discuss refinements of the comparison grouping
mechanism that improve the accuracy of job classification at the expense of a
small additional complexity.
Comment: 26 pages, 2 figures.
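The grouping rule described above (route a newly arriving job into class k when it is smaller than exactly k of the previous m jobs) can be sketched as follows; the class name and the deque-based history are implementation choices of this sketch:

```python
from collections import deque

class ComparisonGrouper:
    """Adaptive job grouping by relative size comparison: a new job of
    size s is assigned class k, where k is the number of the previous
    m jobs strictly larger than s. A larger k means a relatively
    smaller job, which is then served with higher static priority."""
    def __init__(self, m):
        self.m = m
        self.recent = deque(maxlen=m)  # sliding window of last m sizes

    def classify(self, size):
        k = sum(1 for prev in self.recent if prev > size)
        self.recent.append(size)
        return k
```

With m = 3 and arriving sizes 5, 2, 10, 1, the jobs are routed to classes 0, 1, 0, and 3 respectively; under static priority with higher classes served first, the size-1 job (class 3, smallest relative to its window) takes precedence, approximating SRPT's preference for small jobs.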