ICT systems, and data centers in particular, consume a significant amount of energy. With the rapidly increasing number and size of data centers, energy management is becoming essential, and it is therefore desirable to use the energy consumed by data centers more efficiently.
In this thesis, we analyze the energy-aware performance of queueing systems from the traffic point of view. The focus is on using queueing theory to model and analyze a single processor in a data center. The energy consumed by a processor depends on its processing speed: a higher speed consumes more energy, while a lower speed degrades performance. We therefore consider the trade-off between the performance and the energy consumption of the processor. Based on this trade-off, we introduce a speed scaling method that adjusts the processing speed of the processor according to the traffic load of the queueing system. We analyze and compare three optimized speed scaling schemes: static, gated and linear speed scaling. In the gated and linear schemes, there is a switching delay when the processor is switched from the idle state to the busy state.
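To make the comparison concrete, the sketch below simulates a single-server queue with Poisson arrivals and exponential job sizes under a state-dependent speed function and an exponential switching delay. It is only an illustrative assumption of the setting described above, not the thesis model itself: the speed functions, the power model P(s) = s^alpha, the cost E[T] + E[P]/beta, and all numerical parameters are chosen for illustration.

```python
import random

def simulate(speed, idle_power, setup_mean, lam=0.5, alpha=2.0, beta=1.0,
             horizon=200_000, seed=1):
    """Simulate a single-server queue with Poisson arrivals (rate lam),
    exponential job sizes (mean 1), service speed speed(n) when n jobs are
    present, and an exponential switching (setup) delay with mean setup_mean
    incurred whenever the server leaves the idle state.
    Returns the mean response time E[T], the mean power E[P], and an assumed
    weighted cost E[T] + E[P]/beta."""
    rng = random.Random(seed)
    t, n, in_setup = 0.0, 0, False
    area_n, energy = 0.0, 0.0
    while t < horizon:
        # Rates of the competing exponential events in the current state.
        rates, labels = [lam], ["arr"]                # arrival
        if n > 0 and not in_setup:
            rates.append(speed(n))                    # service completion
            labels.append("dep")
        if in_setup:
            rates.append(1.0 / setup_mean)            # end of switching delay
            labels.append("setup")
        total = sum(rates)
        dt = rng.expovariate(total)
        # Accumulate time averages over the sojourn in the current state.
        # Assumption: power is speed(n)**alpha while serving, idle_power otherwise.
        power = speed(n) ** alpha if (n > 0 and not in_setup) else idle_power
        area_n += n * dt
        energy += power * dt
        t += dt
        # Pick which event fired, proportionally to its rate.
        u, event = rng.random() * total, labels[-1]
        for r, lab in zip(rates, labels):
            if u < r:
                event = lab
                break
            u -= r
        if event == "arr":
            if n == 0 and setup_mean > 0:
                in_setup = True                       # idle -> busy triggers the switching delay
            n += 1
        elif event == "dep":
            n -= 1
        else:
            in_setup = False
    mean_T = (area_n / t) / lam                       # Little's law: E[T] = E[N] / lambda
    mean_P = energy / t
    return mean_T, mean_P, mean_T + mean_P / beta
```

As a usage example, the three schemes could be instantiated as follows (again under assumed definitions: a static scheme that keeps a constant speed even when idle, a gated scheme that switches off when idle, and a linear scheme whose speed grows with the number of jobs):

```python
s = 1.0
print(simulate(speed=lambda n: s, idle_power=s**2, setup_mean=0.0))      # static
print(simulate(speed=lambda n: s, idle_power=0.0, setup_mean=0.2))       # gated
print(simulate(speed=lambda n: 0.5 * n, idle_power=0.0, setup_mean=0.2)) # linear
```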
The results demonstrate that the switching delay has a great impact on the optimized trade-off. In our scenario, without a switching delay, the gated and linear schemes achieve the same performance, and both outperform the static scheme. With a switching delay, however, the linear scheme always outperforms the gated scheme, and when the switching delay is long, even the static scheme can be preferable. In practice, the trade-off in our model is strongly affected by the model parameters.