
    Random trees in queueing systems with deadlines

    We survey our research on scheduling aperiodic tasks in real-time systems in order to illustrate the benefits of modelling queueing systems by means of random trees. Relying on a discrete-time single-server queueing system, we investigated the deadline-meeting properties of several scheduling algorithms employed for servicing probabilistically arriving tasks, characterized by arbitrary arrival and execution time distributions and a constant service time deadline T. Taking a non-queueing-theory approach (i.e., without steady-state assumptions), we found that the probability distribution of the random time s_T for which such a system operates without violating any task's deadline is approximately exponential with parameter λ_T = 1/μ_T, with the expectation E[s_T] = μ_T growing exponentially in T. The value μ_T depends on the particular scheduling algorithm, and its derivation is based on the combinatorial and asymptotic analysis of certain random trees. This paper demonstrates that random trees provide an efficient common framework for dealing with different scheduling disciplines and gives an overview of the various combinatorial and asymptotic methods used in the corresponding analysis.
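    Restating the claim above in symbols (a paraphrase using only the quantities named in the abstract, not the paper's derivation), the survival function of the violation-free operating time s_T is approximately exponential:

        \Pr\{ s_T > t \} \approx e^{-\lambda_T t} = e^{-t/\mu_T}, \qquad \mathbb{E}[s_T] = \mu_T = \frac{1}{\lambda_T},

    with μ_T, and hence the expected violation-free horizon, growing exponentially in the deadline T.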

    MORPHOSYS: efficient colocation of QoS-constrained workloads in the cloud

    In hosting environments such as IaaS clouds, desirable application performance is usually guaranteed through the use of Service Level Agreements (SLAs), which specify minimal fractions of resource capacities that must be allocated for proper operation. Arbitrary colocation of applications with different SLAs on a single host may result in inefficient utilization of the host’s resources. In this paper, we propose that periodic resource allocation and consumption models be used for a more granular expression of SLAs. Our proposed SLA model has the salient feature that it exposes flexibilities that enable the IaaS provider to safely transform SLAs from one form to another for the purpose of achieving more efficient colocation. Towards that goal, we present MorphoSys: a framework for a service that allows the manipulation of SLAs to enable efficient colocation of workloads. We present results from extensive trace-driven simulations of colocated Video-on-Demand servers in a cloud setting. The results show that potentially significant reductions in wasted resources (by as much as 60%) are possible using MorphoSys.
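    The abstract does not spell out the SLA transformation rules, so the following is only a minimal sketch of the periodic-SLA idea, assuming an SLA of the form "C resource units every period T" and a simplified utilization-based colocation test. The class and function names (PeriodicSLA, scale_period, fits_on_host) and the feasibility condition are illustrative assumptions, not MorphoSys's actual interface or safety conditions.

        from dataclasses import dataclass
        from math import isclose

        @dataclass(frozen=True)
        class PeriodicSLA:
            """Illustrative periodic SLA: `capacity` resource units every `period` time units."""
            capacity: float
            period: float

            @property
            def utilization(self) -> float:
                return self.capacity / self.period

        def scale_period(sla: PeriodicSLA, factor: int) -> PeriodicSLA:
            """Utilization-preserving rewrite of an SLA to a longer period.
            Whether such a rewrite is actually *safe* depends on how much allocation
            burstiness the application tolerates, which is what a framework like
            MorphoSys has to reason about; this sketch does not check that."""
            return PeriodicSLA(sla.capacity * factor, sla.period * factor)

        def fits_on_host(slas: list[PeriodicSLA], host_capacity: float = 1.0) -> bool:
            """Simplified colocation test: total utilization must not exceed host capacity."""
            total = sum(s.utilization for s in slas)
            return total <= host_capacity or isclose(total, host_capacity)

        # Example: two hypothetical video-streaming SLAs colocated on a unit-capacity host.
        workloads = [PeriodicSLA(0.2, 1.0), PeriodicSLA(0.5, 2.0)]
        print(fits_on_host(workloads))  # True: 0.2 + 0.25 = 0.45 <= 1.0

    Scaling both C and T by the same factor preserves utilization but changes how bursty the allocation is; exposing when such rewrites remain safe for the application is the kind of flexibility the proposed SLA model is meant to provide.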

    Parallel Real-Time Scheduling for Latency-Critical Applications

    In order to provide safety guarantees or quality-of-service guarantees, many of today's systems consist of latency-critical applications, i.e., applications with timing constraints. The problem of scheduling multiple latency-critical jobs on a multiprocessor or multicore machine has been extensively studied for sequential (non-parallelizable) jobs, under different system models and with different objectives. However, the computational requirement of a single job is then still limited by the capacity of a single core. To provide increasingly complex application functionality and to meet higher computational demands within the same or even more stringent timing constraints, we must exploit the internal parallelism of jobs, where individual jobs are parallel programs that can potentially utilize more than one core in parallel. However, there is little work on scheduling multiple parallel jobs that are latency-critical. This dissertation focuses on developing new scheduling strategies, analysis tools, and practical platform design techniques to enable efficient and scalable parallel real-time scheduling for latency-critical applications on multicore systems. In particular, the research focuses on two types of systems: (1) static real-time systems for tasks with deadlines, where the temporal properties of the tasks that need to execute are known a priori and the goal is to guarantee the temporal correctness of the tasks prior to their execution; and (2) online systems for latency-critical jobs, where multiple jobs arrive over time and the goal is to optimize a performance objective of the jobs during execution.

    For static real-time systems with parallel tasks, several scheduling strategies, including global earliest deadline first, global rate monotonic and a novel federated scheduling, are proposed, analyzed and implemented. These scheduling strategies have the best known theoretical performance for parallel real-time tasks under any global strategy, any fixed-priority scheduling and any scheduling strategy, respectively. In addition, federated scheduling is generalized to systems with multiple criticality levels and to systems with stochastic tasks. Both numerical and empirical experiments show that federated scheduling and its variations have good schedulability performance and are efficient in practice.

    For online systems with multiple latency-critical jobs, different online scheduling strategies are proposed and analyzed for different objectives, including maximizing the number of jobs meeting a target latency, maximizing the profit of jobs, minimizing the maximum latency and minimizing the average latency. For example, a simple First-In-First-Out scheduler is proven to be scalable for minimizing the maximum latency. Based on this theoretical intuition, a more practical work-stealing scheduler is developed, analyzed and implemented. Empirical evaluations indicate that, on both real-world and synthetic workloads, this work-stealing implementation performs almost as well as an optimal scheduler.
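    The abstract does not reproduce the analysis, but the commonly cited core-assignment rule for federated scheduling of parallel real-time tasks (each task described by total work C, critical-path length L and deadline D) gives a feel for the approach. The sketch below uses that published rule only; the function name and example values are illustrative, and the dissertation's multi-criticality and stochastic variants are not captured here.

        from math import ceil

        def federated_core_assignment(tasks):
            """Federated-scheduling core assignment for parallel tasks given as
            (work C, critical-path length L, deadline D) tuples.
            High-utilization tasks (C > D) receive ceil((C - L) / (D - L)) dedicated cores;
            the remaining tasks can run as sequential tasks on the leftover cores."""
            assignment = {}
            sequential = []
            for i, (C, L, D) in enumerate(tasks):
                if L > D:
                    raise ValueError(f"task {i} infeasible: critical path {L} exceeds deadline {D}")
                if C > D:  # needs parallelism to meet its deadline
                    assignment[i] = ceil((C - L) / (D - L))
                else:      # can meet its deadline on a single core
                    sequential.append(i)
            return assignment, sequential

        # Example: one heavy parallel task and one light task (hypothetical numbers).
        cores, seq = federated_core_assignment([(100.0, 10.0, 40.0), (5.0, 2.0, 20.0)])
        print(cores, seq)  # {0: 3} [1]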

    Control of multiclass queueing systems with abandonments and adversarial customers

    This thesis considers the defensive surveillance of multiple public areas which are the open, exposed targets of adversarial attacks. We address the operational problem of identifying a real-time decision-making rule for a security team in order to minimise the damage an adversary can inflict within the public areas. We model the surveillance scenario as a multiclass queueing system with customer abandonments, wherein the operational problem translates into developing service policies for a server in order to minimise the expected damage an adversarial customer can inflict on the system. We consider three different surveillance scenarios which may occur in real-world security operations. In each scenario it is only possible to calculate optimal policies in small systems or in special cases, hence we focus on developing heuristic policies that are computationally tractable and demonstrate their effectiveness in numerical experiments.

    In the random adversary scenario, the adversary attacks the system according to a probability distribution known to the server. This problem is a special case of a more general stochastic scheduling problem. We develop new results which complement the existing literature, based on priority policies and an effective approximate policy improvement algorithm.

    We also consider the scenario of a strategic adversary who chooses where to attack. We model the interaction of the server and adversary as a two-person zero-sum game. We develop an effective heuristic based on an iterative algorithm which populates a small set of service policies to be randomised over.

    Finally, we consider the scenario of a strategic adversary who chooses both where and when to attack, and formulate it as a robust optimisation problem. In this case, we demonstrate the optimality of the last-come first-served policy in single-queue systems. In systems with multiple queues, we develop effective heuristic policies based on the last-come first-served policy which incorporate randomisation both within service policies and across service policies.
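    For the zero-sum game scenario, a standard way to compute the server's randomisation over a small, finite set of candidate service policies is the linear-programming formulation of a matrix game. The sketch below illustrates only that generic step; the damage matrix, function name and policy set are made-up examples, not the thesis's iterative policy-population heuristic.

        import numpy as np
        from scipy.optimize import linprog

        def server_mixed_strategy(damage):
            """Minimise worst-case expected damage over candidate service policies (rows)
            against an adversary choosing an attack location (columns).
            LP: min v  s.t.  damage.T @ x <= v,  sum(x) = 1,  x >= 0."""
            damage = np.asarray(damage, dtype=float)
            m, n = damage.shape
            c = np.r_[np.zeros(m), 1.0]                   # minimise the game value v
            A_ub = np.c_[damage.T, -np.ones(n)]           # expected damage per target <= v
            b_ub = np.zeros(n)
            A_eq = np.r_[np.ones(m), 0.0].reshape(1, -1)  # probabilities sum to one
            b_eq = np.array([1.0])
            bounds = [(0, None)] * m + [(None, None)]     # x >= 0, v free
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
            return res.x[:m], res.x[-1]

        # Hypothetical damage matrix: 3 candidate service policies vs. 2 attack locations.
        mix, value = server_mixed_strategy([[4.0, 1.0], [1.0, 3.0], [2.0, 2.0]])
        print(mix.round(3), round(value, 3))

    In the thesis, rather than enumerating all service policies up front, the candidate set over which the server randomises is built up iteratively, which keeps the game small enough to solve.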

    A New Competitive Ratio for Network Applications with Hard Performance Guarantee

    Online algorithms are used to solve problems in which decisions must be made without knowledge of future inputs. The competitive ratio is used to evaluate the performance of an online algorithm: it is the worst-case ratio between the performance of the online algorithm and that of the optimal offline algorithm. However, the competitive ratios in many current studies are relatively low and thus cannot satisfy the needs of customers in practical applications. To provide better service, a common practice for service providers is to add redundancy to the system. This raises a new problem: quantifying the relation between the amount of added redundancy and the resulting system performance. In this dissertation, to address the problem that the competitive ratio is not satisfactory, we ask: how much redundancy must be added to fulfill a given performance guarantee? Based on this question, we define a new competitive ratio that relates the system redundancy to the performance of an online algorithm compared to the offline algorithm. We study three network applications, propose online algorithms to solve the corresponding problems, and study their competitive ratios. To evaluate the performance, we further study the optimal online algorithms and other commonly used algorithms for comparison.

    We first study online scheduling for delay-constrained mobile offloading. WiFi offloading, where mobile users opportunistically obtain data through WiFi rather than through cellular networks, is a promising technique to greatly improve spectrum efficiency and reduce cellular network congestion. We consider a system where the service provider deploys multiple WiFi hotspots to offload mobile traffic under unpredictable mobile user movements. We then study online job allocation with a hard allocation ratio requirement. We consider jobs of various types arriving in some unpredictable pattern, with the system required to allocate a certain ratio of the jobs, and we aim to find the minimum capacity needed to meet a given allocation ratio requirement. Third, we study online routing in multi-hop networks with end-to-end deadlines. We propose reliable online algorithms to schedule packets with unpredictable arrival information and stringent end-to-end deadlines in the network.
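    The abstract does not state the new ratio's exact definition; as a hedged illustration of the general shape such a redundancy-aware guarantee can take (in the spirit of resource augmentation), let ALG_r denote the online algorithm run with r ≥ 1 times the baseline capacity and OPT_1 the offline optimum with the baseline capacity. One can then ask for the smallest r such that

        \inf_{\sigma} \frac{\mathrm{ALG}_{r}(\sigma)}{\mathrm{OPT}_{1}(\sigma)} \;\ge\; \rho,

    where σ ranges over input sequences and ρ is the required performance guarantee, so that larger r (more redundancy) buys a stronger achievable ρ. The dissertation's precise definition may differ from this illustration.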