In-Order Delivery Delay of Transport Layer Coding
A large number of streaming applications use reliable transport protocols
such as TCP to deliver content over the Internet. However, head-of-line
blocking due to packet loss recovery can often result in unwanted behavior and
poor application layer performance. Transport layer coding can help mitigate
this issue by helping to recover from lost packets without waiting for
retransmissions. We consider the use of an on-line network code that inserts
coded packets at strategic locations within the underlying packet stream. If
retransmissions are necessary, additional coding packets are transmitted to
ensure the receiver's ability to decode. An analysis of this scheme is provided
that helps determine both the expected in-order packet delivery delay and its
variance. Numerical results are then used to determine when and how many coded
packets should be inserted into the packet stream, in addition to determining
the trade-offs between reducing the in-order delay and the achievable rate. The
analytical results are finally compared with experimental results to provide
insight into how to minimize the delay of existing transport layer protocols
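The idea of inserting coded packets into the stream can be sketched with a toy simulation. This is an illustrative sketch only, not the paper's actual scheme: it assumes one XOR repair packet per block of k data packets, so the receiver can reconstruct a single loss per block without waiting for a retransmission. All function names are hypothetical.

```python
import random

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def encode_stream(packets, k):
    """Insert one XOR-coded repair packet after every k data packets."""
    out, block = [], None
    for i, p in enumerate(packets, 1):
        out.append(("data", i - 1, p))
        block = p if block is None else xor_bytes(block, p)
        if i % k == 0:
            out.append(("coded", i - k, block))  # repairs packets i-k .. i-1
            block = None
    return out

def receive(stream, k, loss_rate, rng):
    """Drop each data packet with probability loss_rate; use each repair
    packet to reconstruct at most one missing packet in its block."""
    got = {}
    for kind, idx, payload in stream:
        if kind == "data":
            if rng.random() >= loss_rate:
                got[idx] = payload
        else:  # XOR repair covering data packets idx .. idx+k-1
            missing = [j for j in range(idx, idx + k) if j not in got]
            if len(missing) == 1:
                rec = payload
                for j in range(idx, idx + k):
                    if j in got:
                        rec = xor_bytes(rec, got[j])
                got[missing[0]] = rec
    return got
```

Deleting a single data packet from the encoded stream and running `receive` still yields every original packet, which is the head-of-line-blocking benefit: the in-order stream resumes at the next repair packet rather than after a retransmission round trip.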
Network Coding Over SATCOM: Lessons Learned
Satellite networks provide unique challenges that can restrict users' quality
of service. For example, high packet erasure rates and large latencies can
cause significant disruptions to applications such as video streaming or
voice-over-IP. Network coding is one promising technique that has been shown to
help improve performance, especially in these environments. However,
implementing any form of network code can be challenging. This paper will use
an example of a generation-based network code and a sliding-window network code
to help highlight the benefits and drawbacks of using one over the other.
In-order packet delivery delay, as well as network efficiency, will be used as
metrics to help differentiate between the two approaches. Furthermore, lessons
learned during the course of our research will be provided in an attempt to
help the reader understand when and where network coding provides its benefits.
Comment: Accepted to WiSATS 201
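The generation-based versus sliding-window distinction can be made concrete by comparing which data packets each repair symbol covers. A minimal sketch with hypothetical function names (coverage sets only, no actual coding arithmetic):

```python
def generation_coverage(n, gen_size):
    """Generation-based: each repair covers one fixed, disjoint generation,
    so a repair is only useful once its whole generation has been sent."""
    return [list(range(g, min(g + gen_size, n)))
            for g in range(0, n, gen_size)]

def sliding_window_coverage(n, win, stride):
    """Sliding-window: repairs cover overlapping windows that advance with
    the stream, so recent packets stay repairable without waiting for a
    generation boundary."""
    return [list(range(max(0, end - win), end))
            for end in range(stride, n + 1, stride)]
```

The overlap in the sliding-window coverage sets is what tends to lower in-order delivery delay, while the disjoint generations keep the encoder and decoder state simpler, which is the efficiency-versus-delay trade-off the paper examines.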
Performance analysis of CCSDS path service
A communications service, called Path Service, is currently being developed by the Consultative Committee for Space Data Systems (CCSDS) to provide a mechanism for the efficient transmission of telemetry data from space to ground for complex space missions of the future. This is an important service, due to the large volumes of telemetry data that will be generated during these missions. A preliminary analysis of performance of Path Service is presented with respect to protocol-processing requirements and channel utilization
Delay distributions of slotted ALOHA and CSMA
We derive the closed-form delay distributions of slotted ALOHA and nonpersistent carrier sense multiple access (CSMA) protocols under steady state. Three retransmission policies are analyzed. We find that under a binary exponential backoff retransmission policy, finite average delay and finite delay variance can be guaranteed for G<2S and G<4S/3, respectively, where G is the channel traffic and S is the channel throughput. As an example, in slotted ALOHA, S<(ln 2)/2 and S<3(ln 4 - ln 3)/4 are the operating ranges for finite first and second delay moments. In addition, the blocking probability and delay performance as functions of r_max (the maximum number of retransmissions allowed) are also derived
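The slotted-ALOHA operating ranges quoted above follow from the textbook throughput relation S = G·e^(-G). A quick numerical check, assuming only that relation:

```python
import math

def throughput(G):
    """Slotted ALOHA steady-state throughput: S = G * e^{-G}."""
    return G * math.exp(-G)

# Finite mean delay requires G < 2S. With S = G e^{-G} this is
# e^{-G} > 1/2, i.e. G < ln 2, giving the range S < (ln 2)/2.
G1 = math.log(2)
assert abs(throughput(G1) - math.log(2) / 2) < 1e-12

# Finite delay variance requires G < 4S/3, i.e. e^{-G} > 3/4,
# so G < ln(4/3), giving S < 3*(ln 4 - ln 3)/4.
G2 = math.log(4 / 3)
assert abs(throughput(G2) - 3 * (math.log(4) - math.log(3)) / 4) < 1e-12
```

Both asserts pass, confirming that the two throughput bounds in the abstract are exactly the images of G = ln 2 and G = ln(4/3) under S = G·e^(-G).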
Communication Service Requirements For Distributed Interactive Simulation: Part 1, Application Service Characterization, Investigation Of OSI Protocols For Distributed Interactive Simulation
This report discusses the application service requirements of the communication subsystem for the Communication Architecture for Distributed Interactive Simulation (CADIS)
Scalable parallel communications
Coarse-grain parallelism in networking (that is, the use of multiple protocol processors running replicated software sending over several physical channels) can be used to provide gigabit communications for a single application. Since parallel network performance is highly dependent on real issues such as hardware properties (e.g., memory speeds and cache hit rates), operating system overhead (e.g., interrupt handling), and protocol performance (e.g., effect of timeouts), we have performed detailed simulation studies of both a bus-based multiprocessor workstation node (based on the Sun Galaxy MP multiprocessor) and a distributed-memory parallel computer node (based on the Touchstone DELTA) to evaluate the behavior of coarse-grain parallelism. Our results indicate: (1) coarse-grain parallelism can deliver multiple 100 Mbps with currently available hardware platforms and existing networking protocols (such as Transmission Control Protocol/Internet Protocol (TCP/IP) and parallel Fiber Distributed Data Interface (FDDI) rings); (2) scale-up is near linear in n, the number of protocol processors and channels (for small n and up to a few hundred Mbps); and (3) since these results are based on existing hardware without specialized devices (except perhaps for some simple modifications of the FDDI boards), this is a low-cost solution to providing multiple 100 Mbps on current machines.
In addition, from both the performance analysis and the properties of these architectures, we conclude: (1) multiple processors providing identical services and the use of space division multiplexing for the physical channels can provide better reliability than monolithic approaches (it also provides graceful degradation and low-cost load balancing); (2) coarse-grain parallelism supports running several transport protocols in parallel to provide different types of service (for example, one TCP handles small messages for many users, other TCP's running in parallel provide high bandwidth service to a single application); and (3) coarse grain parallelism will be able to incorporate many future improvements from related work (e.g., reduced data movement, fast TCP, fine-grain parallelism) also with near linear speed-ups
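Sending one application's data over n parallel channels amounts to striping the byte stream across the protocol processors and reassembling at the receiver. A minimal sketch of that striping, with hypothetical names and ignoring per-channel protocol state:

```python
def stripe(data: bytes, n: int) -> list:
    """Split a message into n near-equal stripes, one per channel/processor."""
    q, r = divmod(len(data), n)
    stripes, pos = [], 0
    for i in range(n):
        size = q + (1 if i < r else 0)  # first r stripes carry the remainder
        stripes.append(data[pos:pos + size])
        pos += size
    return stripes

def reassemble(stripes) -> bytes:
    """Receiver side: concatenate stripes back in channel order."""
    return b"".join(stripes)
```

In the coarse-grain model each stripe would be handed to its own TCP/IP instance over its own FDDI ring; the near-linear scale-up result says total throughput grows roughly with n as long as the node's memory and interrupt handling are not the bottleneck.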
Collision Avoidance Tree networks
The Collision Avoidance Tree is a new local area network based on a hardware device called the collision avoidance switch, which arbitrates random access to a shared communications channel. The Collision Avoidance Tree combines the benefits of random access (low delay when traffic is light; simple, distributed, and therefore robust protocols) with concurrency of transmission, excellent network utilization, and suitability for the domain of high-speed optical networking. Collision Avoidance Trees are classified into two classes: the Collision Avoidance Single Broadcast (CASB) Tree and the Collision Avoidance Multiple Broadcast (CAMB) Tree. The CASB Tree allows only a single transmission on the network at a given time, while the CAMB Tree is more general and allows concurrent transmissions on the network. This paper describes network architectures (e.g., station and switch protocols) and designs and implementations of the CASB and CAMB Trees. Performance results derived from analyses, simulations, and measurements of experimental networks are also presented
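The arbitration performed by the collision avoidance switches can be pictured as a tournament up the tree: each two-input switch passes one contender toward the root, so exactly one station wins the channel per round. This is a hypothetical sketch of the single-winner (CASB-style) case only; a CAMB tree would additionally let losers on non-conflicting subtrees transmit concurrently.

```python
import random

def arbitrate(requests, rng):
    """One arbitration round: at each two-input switch, a random winner
    among the contending inputs propagates toward the root."""
    contenders = [i for i, wants in enumerate(requests) if wants]
    while len(contenders) > 1:
        nxt = []
        for j in range(0, len(contenders), 2):
            pair = contenders[j:j + 2]
            nxt.append(pair[0] if len(pair) == 1 else rng.choice(pair))
        contenders = nxt
    return contenders[0] if contenders else None
```

Because losing stations simply re-contend in the next round rather than colliding on the channel, the network keeps the low light-load delay of random access without wasting slots on collisions.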