Control Policies for Queueing Systems with Removable Servers and Energy Considerations
108 pages

In data centers, response time is typically of paramount concern. Systems are built with excess capacity to handle peak demand, which leads to servers idling for long periods while continuing to consume energy. However, the decision to turn servers off is not simple: servers require a warming period when they are turned on, so turning a server off now can lead to long delays in the future and may not even reduce energy usage, as servers also consume energy while warming. As the economic impact of energy consumption increases, new analysis is needed to identify policies that balance the trade-off between energy usage and delay performance.

We study two queueing models and propose a class of simple and intuitive policies for each. First, we consider an M/M/1 queue with a removable server that dynamically chooses its service rate from a finite set of rates. If the server is off, the system must warm up for a random, exponentially distributed amount of time before it can begin processing jobs. We show that, under the average cost criterion, work-conserving policies are optimal. We then demonstrate that the optimal policy is characterized by a threshold for turning on the server, and that the optimal service rate increases monotonically with the number of jobs in the system. Finally, we present numerical experiments that provide insight into the practicality of having both a removable server and service rate control.

Next, we consider a parallel queueing system with K removable servers where jobs must be routed to a server upon arrival. We propose a class of policies, called delay-JSQ policies, for the joint routing and server power status control problem. Delay-JSQ policies turn additional servers on when the queue lengths at all non-empty stations exceed some threshold, and route arriving jobs to the shortest non-empty queue.
We show that these policies have the same stability region as join-the-shortest-queue routing where servers cannot turn off. We show that in the two-server setting, in the heavy traffic limit, delay-JSQ policies minimize the rate at which holding cost is incurred while also incurring zero warming cost. We conclude with numerical experiments and observe that delay-JSQ policies automatically, and without knowledge of system parameters, adjust the number of active servers to match capacity to the current demand.
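The routing and activation rule described above can be sketched as a small discrete-event simulation. This is an illustrative simplification, not the paper's model: warming times are ignored, a server is treated as on exactly when its queue is non-empty, and the threshold and rate values below are arbitrary.

```python
import random

def simulate_delay_jsq(lam, mu, K, threshold, horizon, seed=0):
    """Time-average number of jobs under a simplified delay-JSQ rule.

    Simplifying assumptions (not the paper's exact model): no warming
    period, and a server shuts off the instant its queue empties.
    """
    rng = random.Random(seed)
    q = [0] * K          # queue length at each of the K stations
    t, area = 0.0, 0.0   # clock and integral of total jobs over time
    while t < horizon:
        busy = [i for i in range(K) if q[i] > 0]
        total_rate = lam + mu * len(busy)   # arrival rate + active service rates
        dt = rng.expovariate(total_rate)
        area += sum(q) * dt
        t += dt
        if rng.random() < lam / total_rate:
            # Arrival: route to the shortest non-empty queue, unless every
            # non-empty queue exceeds the threshold and an idle server
            # exists, in which case that server is activated.
            nonempty = [i for i in range(K) if q[i] > 0]
            idle = [i for i in range(K) if q[i] == 0]
            if not nonempty:
                dest = 0
            elif idle and all(q[i] > threshold for i in nonempty):
                dest = idle[0]
            else:
                dest = min(nonempty, key=lambda i: q[i])
            q[dest] += 1
        else:
            q[rng.choice(busy)] -= 1  # departure from a random busy server
    return area / t
```

Because departures occur only at busy servers, the sketch is work-conserving; the threshold controls how reluctantly extra servers are brought online as queues grow.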
Modeling Internet Traffic Generations Based on Users and Activities for Telecommunication Applications
A traffic generation model is a stochastic model of the data flow in a communication network. Such models are useful during the development of telecommunication technologies and for analyzing the performance and capacity of various protocols, algorithms, and network topologies. We present two modeling approaches for simulating internet traffic. In our models, we simulate the lengths and interarrival times of individual packets, the discrete units of data transfer over the internet. Our first modeling approach is based on fitting data to known theoretical distributions. The second method utilizes empirical copulae and is completely data-driven. Our models are based on internet traffic data generated by different individuals performing specific tasks (e.g., web browsing, video streaming, and online gaming). When combined, these models can be used to simulate internet traffic from multiple individuals performing typical tasks.
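As an illustration of the first (parametric) approach, the sketch below generates a synthetic packet trace from fitted theoretical distributions. The exponential interarrival and lognormal size families, and all parameter values, are hypothetical stand-ins; the abstract does not name the distributions actually fitted.

```python
import random

def generate_trace(n_packets, rate_pps, size_mu, size_sigma, seed=0):
    """Synthetic packet trace as (arrival_time_s, size_bytes) tuples.

    Assumed (hypothetical) fits: exponential interarrival times with
    mean 1/rate_pps seconds and lognormal packet sizes, clamped to the
    40-1500 byte range typical of Ethernet frames.
    """
    rng = random.Random(seed)
    t, trace = 0.0, []
    for _ in range(n_packets):
        t += rng.expovariate(rate_pps)               # next arrival time
        size = int(rng.lognormvariate(size_mu, size_sigma))
        trace.append((t, max(40, min(1500, size))))  # clamp packet size
    return trace
```

A copula-based variant would instead resample dependent (interarrival, size) pairs from the empirical joint distribution, preserving correlations that the independent parametric draws above ignore.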