
    Evaluating System Performance in Gigabit Networks

    With the current wide deployment of Gigabit Ethernet technology in backbone and workgroup switches, the network performance bottleneck has shifted, for the first time in nearly a decade, from the network to the end hosts and servers. This dramatic bandwidth increase calls for optimizations and careful design in many key components of hosts and servers, including the network adaptor, operating system, protocol stack, memory, and processing power. More importantly, the bandwidth increase can negatively impact OS performance due to the interrupt overhead caused by incoming gigabit traffic. This paper presents models and analytical techniques for studying this negative impact. We first present an analytical model for the ideal system, in which interrupt overhead is ignored. We then present two models that describe the impact of a high interrupt rate on system throughput: one for network adaptors not equipped with DMA engines, and one for network adaptors equipped with DMA engines. In addition, we study system performance under different options for delivering packet data to user applications. Results from both simulations and reported experimental findings show that our analytical models are valid and give a good approximation.
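
    To make the interrupt-overhead effect concrete, here is a minimal sketch in Python of the kind of throughput comparison the abstract describes. The rate-sharing formula and all parameter names (lam, mu, t_int) are illustrative assumptions, not the paper's exact model.

        # Illustrative sketch only: assumes each arriving packet raises one
        # interrupt costing t_int seconds of CPU, leaving the remaining CPU
        # fraction for protocol processing. Not the paper's exact model.

        def ideal_throughput(lam, mu):
            # Ideal system: interrupt overhead ignored; throughput is capped
            # only by the arrival rate lam and the processing rate mu.
            return min(lam, mu)

        def interrupt_throughput(lam, mu, t_int):
            # Assumed model: lam * t_int is the CPU fraction spent in
            # interrupt handling; only what is left processes packets.
            cpu_left = max(0.0, 1.0 - lam * t_int)
            return min(lam, mu * cpu_left)

        if __name__ == "__main__":
            mu = 100_000     # packets/s the stack can handle with a full CPU
            t_int = 5e-6     # assumed 5 us of interrupt handling per packet
            for lam in (20_000, 80_000, 140_000, 200_000):
                print(lam, ideal_throughput(lam, mu),
                      round(interrupt_throughput(lam, mu, t_int)))

    Note how the interrupt-driven throughput peaks and then collapses as the arrival rate grows, which is the degradation such models quantify.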

    Two Analytical Models for Evaluating Performance of Gigabit Ethernet Hosts with Finite Buffer

    Two analytical models are developed to study the impact of interrupt overhead on the operating system performance of network hosts with a limited-size (finite) buffer. Under heavy network traffic, such as that of Gigabit Ethernet, system performance is negatively affected by the interrupt overhead caused by incoming traffic. In particular, packet loss, excessive latency, and significant degradation in system throughput can be experienced. User applications may also livelock, as CPU power is mostly consumed by interrupt handling and protocol processing. In this paper, we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queueing theory, while the second, which is more accurate but more complex, is a pure Markov process. The models yield equations for a number of important system performance metrics, including throughput, latency, packet loss, the stability condition, CPU utilization for interrupt handling and protocol processing, and CPU availability for user applications. Both models yield closed-form solutions and equations that are either mathematically equivalent or very closely matching. Our analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems under light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance, and the paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation.
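
    The finite-buffer metrics named above (loss, throughput, latency) can be illustrated with a plain M/M/1/B queue. The papers' models additionally couple the queue with interrupt handling, so the sketch below is only a simplified stand-in using the standard textbook formulas.

        # Plain M/M/1/B sketch of finite-buffer metrics: loss probability,
        # throughput, and mean latency. A simplified stand-in, not the
        # paper's interrupt-coupled model.

        def mm1b_metrics(lam, mu, B):
            # lam: arrival rate, mu: service rate, B: system capacity (packets).
            rho = lam / mu
            if abs(rho - 1.0) < 1e-12:
                p_loss = 1.0 / (B + 1)       # loss probability when rho == 1
                mean_in_system = B / 2.0
            else:
                p_loss = (1 - rho) * rho**B / (1 - rho**(B + 1))
                mean_in_system = (rho / (1 - rho)
                                  - (B + 1) * rho**(B + 1) / (1 - rho**(B + 1)))
            throughput = lam * (1 - p_loss)        # accepted packets per second
            latency = mean_in_system / throughput  # Little's law (lam > 0 assumed)
            return p_loss, throughput, latency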

    Throughput-Delay Analysis of Interrupt-Driven Kernels with DMA Enabled and Disabled in High-Speed Networks

    Interrupt processing can be a major bottleneck in the end-to-end performance of high-speed networks. The performance of Gigabit network end hosts and servers can be severely degraded by the interrupt overhead caused by heavy incoming traffic; in particular, excessive latency and significant degradation in system throughput can be experienced. In this paper, we present a throughput-delay analysis of this behavior. We develop analytical models based on queueing theory and Markov processes, considering three systems: ideal, PIO, and DMA. In the ideal system, interrupt overhead is ignored. In the PIO system, DMA is disabled and incoming packets are copied by the CPU. In the DMA system, incoming packets are copied by DMA engines. For high-speed network hosts, both PIO and DMA can be desirable configuration options. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems under light and heavy network loads. Simulations and reported experimental results show that our analytical models are valid and give a good approximation.
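
    A back-of-the-envelope way to see why the ideal, PIO, and DMA cases differ is to charge each packet a per-packet CPU cost and take the reciprocal as the saturation rate. The cost breakdown and the numbers below are assumptions for illustration, not the paper's parameters.

        # Assumed per-packet CPU cost model: interrupt handling (t_int),
        # data copying (t_copy), and protocol processing (t_proto).
        # Illustrative numbers only.

        def saturation_rate(t_int, t_copy, t_proto):
            # Packets/s one CPU can sustain when every packet costs
            # t_int + t_copy + t_proto seconds of CPU time.
            return 1.0 / (t_int + t_copy + t_proto)

        t_int, t_proto = 5e-6, 10e-6
        print("ideal:", saturation_rate(0.0, 0.0, t_proto))     # overhead ignored
        print("PIO:  ", saturation_rate(t_int, 8e-6, t_proto))  # CPU copies packets
        print("DMA:  ", saturation_rate(t_int, 0.0, t_proto))   # DMA engine copies

    In this toy model, DMA beats PIO simply because the copy cost moves off the CPU, while the ideal case bounds what any interrupt-driven configuration can reach.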

    Two Analytical Models (with Infinite Buffer) for Evaluating Performance of Gigabit Ethernet Hosts

    Two analytical models are developed to study the impact of interrupt overhead on the operating system performance of network hosts subjected to Gigabit network traffic. Under heavy network traffic, system performance is negatively affected by the interrupt overhead caused by incoming traffic. In particular, excessive latency and significant degradation in system throughput can be experienced. User applications may also livelock, as CPU power is mostly consumed by interrupt handling and protocol processing. In this paper, we present and compare two analytical models that capture host behavior and evaluate its performance. The first model is based on Markov processes and queueing theory, while the second, which is more accurate but more complex, is a pure Markov process. For the most part, both models give mathematically equivalent closed-form solutions for a number of important system performance metrics, including throughput, latency, the stability condition, CPU utilization for interrupt handling and protocol processing, and CPU availability for user applications. The analysis yields insight into understanding and predicting the impact of system and network choices on the performance of interrupt-driven systems under light and heavy network loads. More importantly, our analytical work can also be valuable in improving host performance, and the paper gives guidelines and recommendations to address design and implementation issues. Simulation and reported experimental results show that our analytical models are valid and give a good approximation.
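
    As a flavor of the closed-form metric set listed above, the sketch below computes a stability condition, the two CPU utilizations, and residual CPU availability from an assumed per-packet cost model (t_int for interrupt handling, t_proto for protocol processing). It is a simplified illustration, not the paper's derivation.

        # Assumed cost model: each packet consumes t_int seconds of interrupt
        # handling and t_proto seconds of protocol processing on one CPU.
        # Simplified illustration of the metric set, not the paper's closed forms.

        def host_metrics(lam, t_int, t_proto):
            u_int = lam * t_int                  # CPU share in interrupt handling
            u_proto = lam * t_proto              # CPU share in protocol processing
            stable = u_int + u_proto < 1.0       # stability condition
            u_user = max(0.0, 1.0 - u_int - u_proto)  # CPU left for applications
            metrics = {"stable": stable, "U_int": u_int,
                       "U_proto": u_proto, "U_user": u_user}
            if stable:
                mu_eff = (1.0 - u_int) / t_proto       # protocol rate on remaining CPU
                metrics["latency"] = 1.0 / (mu_eff - lam)  # M/M/1-style mean delay
            return metrics

    The stability condition lam * (t_int + t_proto) < 1 falls out directly: once interrupt handling plus protocol processing demand a full CPU, no time remains for user applications and the queue grows without bound.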