
    Truth and Regret in Online Scheduling

    We consider a scheduling problem where a cloud service provider has multiple units of a resource available over time. Selfish clients submit jobs, each with an arrival time, deadline, length, and value. The service provider's goal is to implement a truthful online mechanism for scheduling jobs so as to maximize the social welfare of the schedule. Recent work shows that under a stochastic assumption on job arrivals, there is a single-parameter family of mechanisms that achieves near-optimal social welfare. We show that given any such family of near-optimal online mechanisms, there exists an online mechanism that in the worst case performs nearly as well as the best of the given mechanisms. Our mechanism is truthful whenever the mechanisms in the given family are truthful and prompt, and achieves regret that is optimal within constant factors. We model the problem of competing against a family of online scheduling mechanisms as one of learning from expert advice. A primary challenge is that any scheduling decisions we make affect not only the payoff at the current step, but also the resource availability and payoffs in future steps. Furthermore, switching from one algorithm (a.k.a. expert) to another in an online fashion is challenging, both because it requires synchronizing with the state of the algorithm being switched to and because it affects the incentive structure of the algorithms. We further show how to adapt our algorithm to a non-clairvoyant setting where job lengths are unknown until jobs are run to completion. Once again, in this setting, we obtain truthfulness along with asymptotically optimal regret (within poly-logarithmic factors).
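
    For illustration, below is a minimal, self-contained Python sketch of the expert-advice framing, assuming a hypothetical ToyMechanism class standing in for the candidate scheduling mechanisms and a simple epoch structure. The paper's actual construction must additionally handle state synchronization and the incentive effects of switching between mechanisms, which this sketch omits.

        import math
        import random

        class ToyMechanism:
            """Hypothetical stand-in for a scheduling mechanism: each epoch it
            yields some social welfare drawn from its own fixed distribution."""
            def __init__(self, mean_welfare):
                self.mean_welfare = mean_welfare

            def run_epoch(self):
                return max(0.0, random.gauss(self.mean_welfare, 0.1))

        def hedge_over_mechanisms(mechanisms, num_epochs, eta=0.1):
            """Follow one mechanism (expert) per epoch, chosen with probability
            proportional to exp(eta * cumulative reward) -- the classic Hedge rule."""
            cumulative = [0.0] * len(mechanisms)
            total_welfare = 0.0
            for _ in range(num_epochs):
                best = max(cumulative)
                # Subtract the running maximum before exponentiating to keep weights stable.
                weights = [math.exp(eta * (c - best)) for c in cumulative]
                chosen = random.choices(range(len(mechanisms)), weights=weights)[0]
                rewards = [m.run_epoch() for m in mechanisms]  # welfare of each expert this epoch
                total_welfare += rewards[chosen]
                cumulative = [c + r for c, r in zip(cumulative, rewards)]
            return total_welfare

        # Example: three candidate mechanisms; Hedge's collected welfare tracks the best one.
        print(hedge_over_mechanisms([ToyMechanism(0.3), ToyMechanism(0.6), ToyMechanism(0.9)], 1000))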

    Run Generation Revisited: What Goes Up May or May Not Come Down

    In this paper, we revisit the classic problem of run generation. Run generation is the first phase of external-memory sorting, where the objective is to scan through the data, reorder elements using a small buffer of size M, and output runs (contiguously sorted chunks of elements) that are as long as possible. We develop algorithms for minimizing the total number of runs (or equivalently, maximizing the average run length) when the runs are allowed to be sorted or reverse sorted. We study the problem in the online setting, both with and without resource augmentation, and in the offline setting. (1) We analyze alternating-up-down replacement selection (runs alternate between sorted and reverse sorted), which was studied by Knuth as far back as 1963. We show that this simple policy is asymptotically optimal. Specifically, we show that alternating-up-down replacement selection is 2-competitive and that no deterministic online algorithm can perform better. (2) We give online algorithms having smaller competitive ratios with resource augmentation. Specifically, we exhibit a deterministic algorithm that, when given a buffer of size 4M, is able to match or beat any optimal algorithm having a buffer of size M. Furthermore, we present a randomized online algorithm which is 7/4-competitive when given a buffer twice the size of the optimal algorithm's. (3) We demonstrate that performance can also be improved with a small amount of foresight. We give a 3/2-competitive algorithm that has foreknowledge of the next 3M elements of the input stream. For the extreme case where all future elements are known, we design a PTAS for computing the optimal strategy a run generation algorithm must follow. (4) Finally, we present algorithms tailored for nearly sorted inputs, which are guaranteed to have optimal solutions with sufficiently long runs.
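
    As a point of reference, here is a minimal Python sketch of classic (ascending-only) replacement selection with a buffer of M elements. The alternating-up-down policy analyzed in the paper additionally alternates between ascending and descending runs (using a max-heap for the latter), which this sketch does not implement.

        import heapq

        def replacement_selection(stream, M):
            """Classic ascending-only replacement selection with a buffer of M
            elements. Returns the list of generated runs, each sorted ascending."""
            it = iter(stream)
            heap = []
            for _ in range(M):                       # fill the buffer
                try:
                    heap.append(next(it))
                except StopIteration:
                    break
            heapq.heapify(heap)
            runs, current, frozen = [], [], []
            for x in it:
                smallest = heapq.heappop(heap)       # emit the smallest buffered element
                current.append(smallest)
                if x >= smallest:
                    heapq.heappush(heap, x)          # x can still extend the current run
                else:
                    frozen.append(x)                 # x must wait for the next run
                if not heap:                         # buffer exhausted: start a new run
                    runs.append(current)
                    current, heap, frozen = [], frozen, []
                    heapq.heapify(heap)
            current.extend(heapq.nsmallest(len(heap), heap))  # flush the buffer
            if current:
                runs.append(current)
            if frozen:
                frozen.sort()
                runs.append(frozen)
            return runs

        # Example: with a buffer of 3 elements, these ten inputs fit into two ascending runs.
        print(replacement_selection([5, 1, 8, 2, 9, 3, 7, 0, 6, 4], M=3))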

    Cloud Workload Allocation Approaches for Quality of Service Guarantee and Cybersecurity Risk Management

    It has become a dominant trend in industry to adopt cloud computing -- thanks to its unique advantages in flexibility, scalability, elasticity, and cost efficiency -- for providing online cloud services over the Internet using large-scale data centers. In the meantime, the relentless increase in demand for affordable and high-quality cloud-based services, for individuals and businesses, has led to tremendously high power consumption and operating expense and has thus posed pressing challenges for cloud service providers in finding efficient resource allocation policies. Allowing several services or Virtual Machines (VMs) to share the cloud's infrastructure enables cloud providers to optimize resource usage, power consumption, and operating expense. However, server sharing among users and VMs causes performance degradation and introduces cybersecurity risks. Consequently, developing efficient and effective resource management policies that make appropriate decisions to optimize the trade-offs among resource usage, service quality, and cybersecurity loss plays a vital role in the sustainable future of cloud computing. In this dissertation, we focus on cloud workload allocation problems for resource optimization subject to Quality of Service (QoS) guarantees and cybersecurity risk constraints. To facilitate our research, we first develop a cloud computing prototype that we use to empirically validate the performance of different proposed cloud resource management schemes in a close-to-practical, yet isolated and well-controlled, environment. We then focus our research on resource management policies for real-time cloud services with QoS guarantees. Based on a queueing model with reneging, we establish and formally prove a series of fundamental principles relating service timing characteristics to resource demands, and based on these we develop several novel resource management algorithms that statically guarantee the QoS requirements of cloud users. We then study the problem of mitigating cybersecurity risk and loss in cloud data centers via cloud resource management. We employ game theory to model the VM-to-VM interdependent cybersecurity risks in cloud clusters. We then conduct a thorough analysis based on our game-theoretic model and develop several algorithms for cybersecurity risk management. Specifically, we start our cybersecurity research from a simple case with only two types of VMs and then extend it to a more general case with an arbitrary number of VM types. Our extensive numerical and experimental results show that our proposed algorithms can significantly outperform existing methodologies for large-scale cloud data centers in terms of resource usage, cybersecurity loss, and computational effectiveness.
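
    To make the reneging idea concrete, below is a toy, illustrative Python simulation of a single-server queue in which jobs renege when their projected wait exceeds their patience, yielding one simple QoS-style metric. This is not the dissertation's queueing model or any of its algorithms; every name and parameter is an assumption for illustration only.

        import random

        def simulate_reneging_queue(arrival_rate, service_rate, patience_mean,
                                    num_jobs=100000, seed=0):
            """Toy single-server FIFO queue: a job reneges (leaves unserved) if its
            projected waiting time exceeds an exponentially distributed patience.
            Returns the fraction of jobs that renege, a crude QoS-style metric."""
            rng = random.Random(seed)
            server_free_at = 0.0
            clock = 0.0
            reneged = 0
            for _ in range(num_jobs):
                clock += rng.expovariate(arrival_rate)        # next Poisson arrival
                wait = max(0.0, server_free_at - clock)       # time until the server frees up
                patience = rng.expovariate(1.0 / patience_mean)
                if wait > patience:
                    reneged += 1                              # job abandons the queue
                else:
                    service = rng.expovariate(service_rate)
                    server_free_at = clock + wait + service
            return reneged / num_jobs

        # Example: pushing utilization toward 1 increases the reneging fraction (worse QoS).
        print(simulate_reneging_queue(arrival_rate=0.9, service_rate=1.0, patience_mean=2.0))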

    A Polynomial Time Algorithm for Spatio-Temporal Security Games

    An ever-important issue is protecting infrastructure and other valuable targets from a range of threats, from vandalism to theft to piracy to terrorism. The "defender" can rarely afford the resources needed for 100% protection. Thus, the key question is how to provide the best protection using the limited available resources. We study a practically important class of security games that is played out in space and time, with targets and "patrols" moving on a real line. A central open question here is whether the Nash equilibrium (i.e., the minimax strategy of the defender) can be computed in polynomial time. We resolve this question in the affirmative. Our algorithm runs in time polynomial in the input size, and only polylogarithmic in the number of possible patrol locations (M). Further, we provide a continuous extension in which patrol locations can take arbitrary real values. Prior work obtained polynomial-time algorithms only under a substantial assumption, e.g., a constant number of rounds. Further, all of these algorithms have running times polynomial in M, which can be very large.
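
    To make the minimax objective concrete: for a small game in which the patrol schedules can be enumerated explicitly, the defender's maximin mixed strategy is the solution of the standard zero-sum linear program sketched below in Python (using scipy). This brute-force baseline is not the paper's polynomial-time algorithm (the point of the paper is precisely to avoid enumerating the huge space of spatio-temporal schedules), and the payoff matrix here is an illustrative assumption.

        import numpy as np
        from scipy.optimize import linprog

        def defender_maximin(payoff):
            """Maximin mixed strategy for the defender (row player) of a zero-sum
            game, given the defender's payoff matrix, via the standard LP.
            Returns (strategy, game_value)."""
            n_rows, n_cols = payoff.shape
            # Variables: [x_1 .. x_n, v]; maximize v  <=>  minimize -v.
            c = np.zeros(n_rows + 1)
            c[-1] = -1.0
            # For every attacker column j: v - sum_i x_i * payoff[i, j] <= 0.
            A_ub = np.hstack([-payoff.T, np.ones((n_cols, 1))])
            b_ub = np.zeros(n_cols)
            # Probabilities sum to one; v is unconstrained.
            A_eq = np.hstack([np.ones((1, n_rows)), np.zeros((1, 1))])
            b_eq = np.array([1.0])
            bounds = [(0, None)] * n_rows + [(None, None)]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
            return res.x[:-1], res.x[-1]

        # Toy example: rows = patrol schedules, columns = attacker (target, time) choices;
        # the defender-utility entries are made up purely for illustration.
        toy_payoff = np.array([[1.0, 0.0, 0.3],
                               [0.2, 1.0, 0.1],
                               [0.4, 0.3, 1.0]])
        strategy, value = defender_maximin(toy_payoff)
        print(np.round(strategy, 3), round(value, 3))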