Fairness in overloaded parallel queues
Maximizing throughput for heterogeneous parallel server queues has received
quite a bit of attention from the research community and the stability region
for such systems is well understood. However, many real-world systems have
periods where they are temporarily overloaded. Under such scenarios, the
unstable queues often starve limited resources. This work examines what happens
during periods of temporary overload. Specifically, we look at how to fairly
distribute stress. We explore the dynamics of the queue workloads under the
MaxWeight scheduling policy during long periods of stress and discuss how to
tune this policy in order to achieve a target fairness ratio across these
workloads.
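As a rough illustration of the MaxWeight policy discussed above (a generic sketch, not the paper's tuned variant), each server serves the queue that maximizes the product of queue length and that server's service rate for the queue:

```python
def maxweight_assign(queues, rates):
    """MaxWeight scheduling for heterogeneous parallel servers.

    queues:   list of current queue lengths
    rates:    rates[i][j] is server j's service rate for queue i
    Returns:  for each server j, the index of the queue it serves,
              chosen to maximize queues[i] * rates[i][j].
    """
    n_servers = len(rates[0])
    assignment = []
    for j in range(n_servers):
        # Each server picks the queue with the largest weight q_i * mu_ij.
        i_star = max(range(len(queues)), key=lambda i: queues[i] * rates[i][j])
        assignment.append(i_star)
    return assignment
```

For example, with queue lengths `[3, 2]` and rates `[[1.0, 0.1], [0.5, 2.0]]`, server 0 serves queue 0 (weight 3.0 vs 0.4) and server 1 serves queue 1 (weight 4.0 vs 0.3). Tuning for fairness, as the abstract describes, would modify these weights rather than this basic rule.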
A reiterative method for calculating the early bactericidal activity of antituberculosis drugs.
Studies of early bactericidal activity (EBA) are important in the rapid evaluation of new antituberculosis drugs. Historically, these have concentrated on the log fall in the viable count in sputum during the first 48 hours of therapy. In this paper, we provide a mathematical model that suggests that the viable count in sputum follows an exponential decay curve with the equation V = S + Me^(-kt) (where V is the viable count, M the population of bacteria susceptible to the test drug, S the population susceptible only to sterilizing agents, t the day of sputum collection as related to the start of therapy, k the rate constant for the bacteria killed each day, and e the Napierian constant). We demonstrate that data from clinical trials fit the exponential decay model. We propose that future EBA studies should be performed by measuring daily quantitative counts for at least 5 days. We also propose that the early bactericidal activity of antituberculosis drugs should be compared using the time taken to reduce the viable count by 50% (vt(50)). A further reiterative refinement, following a rule set based on the statistically best fit to the exponential decay model, is described that will allow investigators to identify anomalous results and thus enhance the accuracy of measuring early bactericidal activity.
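The decay model and the proposed vt(50) summary statistic can be sketched directly from the abstract's equation (a minimal illustration; the function names are ours, and the closed form assumes M > S so that the count can actually halve):

```python
import math

def viable_count(t, S, M, k):
    """Viable count on day t under the model V = S + M * e^(-k t)."""
    return S + M * math.exp(-k * t)

def vt50(S, M, k):
    """Time for the viable count to fall to half its initial value V(0) = S + M.

    Solving S + M e^(-k t) = (S + M) / 2 gives
        t = ln(2M / (M - S)) / k,
    which requires M > S (otherwise the count never halves).
    """
    if M <= S:
        raise ValueError("count cannot halve unless M > S")
    return math.log(2.0 * M / (M - S)) / k
```

For instance, with no sterilizing-resistant population (S = 0) and k = ln 2 per day, the count halves in exactly one day. In a real study, S, M, and k would be fitted to the daily quantitative counts before vt(50) is computed.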
On the convergence of mirror descent beyond stochastic convex programming
In this paper, we examine the convergence of mirror descent in a class of
stochastic optimization problems that are not necessarily convex (or even
quasi-convex), and which we call variationally coherent. Since the standard
technique of "ergodic averaging" offers no tangible benefits beyond convex
programming, we focus directly on the algorithm's last generated sample (its
"last iterate"), and we show that it converges with probability 1 if the
underlying problem is coherent. We further consider a localized version of
variational coherence which ensures local convergence of stochastic mirror
descent (SMD) with high probability. These results contribute to the landscape
of non-convex stochastic optimization by showing that (quasi-)convexity is not
essential for convergence to a global minimum: rather, variational coherence, a
much weaker requirement, suffices. Finally, building on the above, we reveal an
interesting insight regarding the convergence speed of SMD: in problems with
sharp minima (such as generic linear programs or concave minimization
problems), SMD reaches a minimum point in a finite number of steps (a.s.), even
in the presence of persistent gradient noise. This result is to be contrasted
with existing black-box convergence rate estimates that are only asymptotic.
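A minimal sketch of stochastic mirror descent with the entropic mirror map on the probability simplex (the classic exponentiated-gradient instance; this specific setup is our illustration, not the paper's general framework). On a noisy linear objective, a sharp-minimum case, the last iterate concentrates on the optimal vertex:

```python
import math
import random

def smd_simplex(grad_oracle, x0, steps, lr):
    """Stochastic mirror descent on the simplex with the entropic mirror map.

    The mirror update with negative entropy reduces to the multiplicative
    (exponentiated-gradient) rule: x_i <- x_i * exp(-lr * g_i), renormalized.
    Returns the last iterate, as studied in the abstract above.
    """
    x = list(x0)
    for _ in range(steps):
        g = grad_oracle(x)  # one noisy gradient sample
        x = [xi * math.exp(-lr * gi) for xi, gi in zip(x, g)]
        z = sum(x)
        x = [xi / z for xi in x]  # mirror step lands back on the simplex
    return x

# Minimize c . x over the simplex with persistent Gaussian gradient noise;
# the minimum is the vertex with the smallest cost, here coordinate 1.
c = [1.0, 0.0, 2.0]
noisy_grad = lambda x: [ci + random.gauss(0.0, 0.1) for ci in c]
```

Running `smd_simplex(noisy_grad, [1/3, 1/3, 1/3], 200, 0.5)` drives nearly all mass onto coordinate 1 despite the noise, in line with the finite-step convergence the abstract describes for sharp minima.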