
    On Reliable Transmission of Data over Simple Wireless Channels

    Standard protocols for reliable data transmission over unreliable channels are based on various Automatic Repeat reQuest (ARQ) schemes, whereby the sending node receives feedback from the receiver and retransmits the missing data. We discuss this issue in the context of one-way data transmission over simple wireless channels characteristic of many sensing and monitoring applications. Using a specific project as an example, we demonstrate how the constraints of a low-cost embedded wireless system get in the way of a workable solution, precluding the use of popular schemes based on windows and periodic acknowledgments. We also propose an efficient solution to the problem and demonstrate its advantage over the traditional protocols.
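The ARQ idea the abstract builds on can be illustrated with a toy stop-and-wait simulation. This is hypothetical code, not the paper's protocol; `send_reliable`, `loss_prob`, and the sensor-style payloads are invented for illustration.

```python
import random

def send_reliable(data, loss_prob=0.3, max_tries=20, rng=None):
    """Stop-and-wait ARQ sketch: retransmit each packet until it is ACKed.

    loss_prob crudely models an unreliable wireless link; a seeded RNG
    keeps the simulation deterministic.
    """
    rng = rng or random.Random(42)
    delivered = []
    total_transmissions = 0
    for seq, packet in enumerate(data):
        for _attempt in range(max_tries):
            total_transmissions += 1
            if rng.random() >= loss_prob:      # packet survived the channel
                delivered.append((seq, packet))
                break                          # receiver's ACK ends the wait
        else:
            raise RuntimeError(f"packet {seq} lost {max_tries} times")
    return delivered, total_transmissions

msgs = ["temp=21", "temp=22", "temp=23"]
delivered, sent = send_reliable(msgs)
assert [p for _, p in delivered] == msgs   # all data arrives, in order
assert sent >= len(msgs)                   # losses cost extra transmissions
```

The per-packet retransmission loop is exactly what windowed schemes with periodic acknowledgments try to amortize; the paper's point is that on constrained hardware even this amortization machinery can be too expensive.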

    A review of experiences with reliable multicast


    Memory-manager/Scheduler Co-design: Optimizing Event-driven Programs to Improve Cache Behavior

    Event-driven programming has emerged as a standard to implement high-performance servers due to its flexibility and low OS overhead. Still, memory access remains a bottleneck. Generic optimization techniques yield only small improvements in the memory access behavior of event-driven servers, as such techniques do not exploit their specific structure and behavior. This paper presents an optimization framework dedicated to event-driven servers, based on a strategy to eliminate data-cache misses. We propose a novel memory manager combined with a tailored scheduling strategy to restrict the working data set of the program to a memory region mapped directly into the data cache. Our approach exploits the flexible scheduling and deterministic execution of event-driven servers. We have applied our framework to industry-standard web servers including TUX and thttpd, as well as to the Squid proxy server and the Cactus QoS framework. Testing TUX and thttpd using a standard HTTP benchmark tool shows that our optimizations applied to the TUX web server reduce L2 data cache misses under heavy load by up to 75% and increase the throughput of the server by up to 38%.
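The co-design the abstract describes, a memory manager that recycles event state from a small fixed region plus a scheduler that groups events, can be sketched roughly as follows. All names here are hypothetical; the real framework confines a cache-mapped memory region in C, which this Python sketch only mimics with a fixed free list.

```python
from collections import deque

class EventPool:
    """Fixed-capacity free-list allocator: all event state is recycled from
    a small set of pre-allocated slots, so the working set stays bounded
    (the sketch's stand-in for a cache-resident memory region)."""
    def __init__(self, capacity):
        self.free = deque(dict() for _ in range(capacity))
    def alloc(self):
        return self.free.popleft()   # reuse a slot instead of allocating
    def release(self, slot):
        slot.clear()
        self.free.append(slot)

def run_batched(handlers, events, pool):
    """Scheduler sketch: group pending events by handler so each handler's
    code and data are touched once per batch, mimicking a cache-aware
    scheduling strategy."""
    by_handler = {}
    for name, payload in events:
        by_handler.setdefault(name, []).append(payload)
    results = []
    for name, batch in by_handler.items():
        for payload in batch:
            slot = pool.alloc()
            slot["out"] = handlers[name](payload)
            results.append(slot["out"])
            pool.release(slot)
    return results

pool = EventPool(4)
handlers = {"double": lambda x: 2 * x, "inc": lambda x: x + 1}
out = run_batched(handlers, [("double", 3), ("inc", 3), ("double", 5)], pool)
assert out == [6, 10, 4]   # events reordered: both "double" events ran first
```

The reordering is legitimate only because event-driven servers give the scheduler freedom over dispatch order, which is exactly the flexibility the abstract says the framework exploits.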

    Performance evaluation of the IEEE 802.2-LLC protocol in the Linux kernel

    Advisor: Roberto A. Hexsel. Master's dissertation, Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics; defended in Curitiba, 2006. Includes appendix and bibliography. Abstract: Computer clusters are used to supply the computational power demanded by scientific applications. Communication between applications in these clusters is done through message-passing libraries that normally use the TCP/IP protocols as transport. By restricting the cluster network to a local network, it is possible to replace the TCP/IP protocols with the LLC protocol, with a performance gain. This work presents a performance evaluation of a modification of the OPENMPI library to work over LLC, comparing it with the TCP/IP implementation. The NETPIPE and MPPTEST tools, the Fast Fourier Transform, and Radix sort were used for the evaluation. The NETPIPE results show that LLC performs 16 to 21% better than TCP/IP; for MPPTEST the gain ranges from 3 to 12%, with the largest gains for small messages. The Fast Fourier Transform (FFT) results show that LLC is 2.8 to 6.8% faster than TCP/IP as the number of processors varies from 2 to 16. The Radix sort result shows no real gain for LLC because that program does not place significant demand on the communication subsystem.

    Masking the overhead of protocol layering

    Protocol layering has been advocated as a way of dealing with the complexity of computer communication. It has also been criticized for its performance overhead. In this paper, we present some insights in the design of protocols, and how these insights can be used to mask the overhead of layering, in a way similar to client caching in a file system. With our techniques, we achieve an order of magnitude improvement in end-to-end message latency in the Horus communication framework. Over an ATM network, we are able to do a round-trip message exchange, of varying levels of semantics, in about 170 microseconds, using a protocol stack of four layers written in ML, a high-level functional language.
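The caching analogy in the abstract can be illustrated with a hypothetical sketch: reuse the stack's concatenated headers while no layer's state changes, so the common send path does almost no per-layer work. This is an invented simplification for illustration, not Horus's actual mechanism, and all class and method names below are made up.

```python
class Layer:
    """One protocol layer; `version` is bumped whenever its state changes."""
    def __init__(self, name):
        self.name = name
        self.version = 0
    def header(self):
        return f"{self.name}:v{self.version};".encode()

class CachedStack:
    """Reuse the concatenated stack header until some layer's state changes,
    so the fast path of send() is a single byte-string concatenation."""
    def __init__(self, layers):
        self.layers = layers
        self._key = None
        self._cached = b""
    def send(self, payload: bytes) -> bytes:
        key = tuple(layer.version for layer in self.layers)
        if key != self._key:                  # slow path: rebuild all headers
            self._cached = b"".join(layer.header() for layer in self.layers)
            self._key = key
        return self._cached + payload         # fast path: no per-layer work

stack = CachedStack([Layer("frag"), Layer("order"), Layer("mcast")])
m1 = stack.send(b"hello")
m2 = stack.send(b"world")
assert m1.endswith(b"hello") and m2.endswith(b"world")
assert m1[: -len(b"hello")] == m2[: -len(b"world")]  # header came from cache
```

The point of the sketch is that layering cost is paid only when layer state actually changes, which is the sense in which the overhead is "masked" rather than removed.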
