Transport layer protocols and architectures for satellite networks
Designing efficient transmission mechanisms for advanced satellite networks is a demanding task, requiring the definition and implementation of protocols and architectures well suited to this challenging environment. In particular, transport protocol performance over satellite networks is impaired by the characteristics of the satellite radio link, specifically by the long propagation delay and the possible presence of segment losses due to physical channel errors. The level of impact on performance depends upon the link design (type of constellation, link margin, coding and modulation) and operational conditions (link obstructions, terminal mobility, weather conditions, etc.). To address these critical aspects, a number of possible solutions have been presented in the literature, ranging from limited modifications of standard protocols (e.g. TCP, the Transmission Control Protocol) to completely alternative protocol and network architectures. However, despite the great number of different proposals (or perhaps also because of it), the general framework appears quite fragmented and there is a compelling need for integrating research competences and efforts. This is the intent of the transport protocols research line within the European SatNEx (Satellite Network of Excellence) project. Stemming from the authors' work on this project, this paper aims to provide the reader with an updated overview of the approaches that can be pursued to overcome the limitations of current transport protocols and architectures when applied to satellite communications. The possible solutions are classified into the following categories: optimization of TCP interactions with lower layers, TCP enhancements, performance enhancing proxies (PEP) and delay-tolerant networks (DTN).
Advantages and disadvantages of the different approaches, as well as their interactions, are investigated and discussed, taking into account performance improvement, complexity, and compliance with standard TCP semantics. From this analysis it emerges that DTN architectures could integrate some of the most efficient solutions from the other categories by inserting them into a new, rigorous framework. These innovative architectures may therefore represent a promising solution to some of the important problems posed at the transport layer by satellite networks, at least in a medium-to-long-term perspective. Copyright (c) 2006 John Wiley & Sons, Ltd.
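The penalty that long propagation delay imposes on loss-driven TCP can be made concrete with the well-known Mathis approximation, which bounds steady-state throughput by MSS / (RTT * sqrt(p)). The minimal Python sketch below uses illustrative figures (1460-byte MSS, a terrestrial vs. a GEO-scale RTT) that are not taken from the paper:

```python
from math import sqrt

def mathis_throughput(mss_bytes, rtt_s, loss_rate):
    """Upper bound on steady-state TCP throughput in bit/s,
    per the Mathis et al. approximation: rate <= MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss_rate))

# Same segment loss rate, terrestrial vs. GEO satellite RTT (illustrative):
for label, rtt in [("terrestrial, 40 ms", 0.040), ("GEO, 560 ms", 0.560)]:
    rate = mathis_throughput(1460, rtt, 1e-4)
    print(f"{label}: {rate / 1e6:.2f} Mbit/s")
```

With identical loss, the GEO path's fourteen-fold RTT cuts the achievable rate by the same factor, which is why the surveyed solutions (lower-layer optimization, TCP enhancements, PEPs, DTN) all aim at hiding either the delay or the losses from the congestion-control loop.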
Networking with multi-service GEO satellites: cross-layer approaches for bandwidth allocation
Cross-layer Radio Resource Management (CL-RRM) has recently been investigated by quite a few research groups in wireless communications. In the specific satellite-networking environment, the paper presents an overview of different CL-RRM techniques devoted to dynamic bandwidth allocation, whose interactions span the physical, data link, network and transport layers, in various combinations. A multi-service setting is considered, in the presence of variations in both traffic and channel conditions. Regarding the latter, bit and coding rate adaptation are adopted as fade countermeasures, and their effect on the higher layers is modelled as a bandwidth reduction. Traffic models and methodologies for dynamic bandwidth allocation and performance optimization are discussed. Numerical examples are presented to highlight throughput/fairness trade-offs for long-lived TCP connections that share multiple channels with different fading depths.
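The throughput/fairness trade-off that arises when rate adaptation is modelled as a bandwidth reduction can be illustrated with a toy allocator. Everything below is a hypothetical sketch, not the paper's model: deeper fades force stronger coding, which lowers the information rate per hertz, so an aggregate-throughput-maximizing policy starves faded channels while an equal-bandwidth policy sacrifices total rate:

```python
def effective_rates(allocations, efficiencies):
    # Coding-rate adaptation under fade lowers the information rate:
    # a deeper fade means stronger coding, hence lower spectral efficiency.
    return [a * e for a, e in zip(allocations, efficiencies)]

def allocate(total_bw, efficiencies, policy):
    """Toy dynamic bandwidth allocation under two extreme policies."""
    if policy == "max-throughput":   # all bandwidth to the least-faded channel
        alloc = [0.0] * len(efficiencies)
        alloc[efficiencies.index(max(efficiencies))] = total_bw
    else:                            # "equal": fair in raw bandwidth, not in rate
        alloc = [total_bw / len(efficiencies)] * len(efficiencies)
    return alloc

effs = [1.0, 0.5, 0.25]              # clear sky, moderate fade, deep fade
for pol in ("max-throughput", "equal"):
    rates = effective_rates(allocate(30.0, effs, pol), effs)
    print(pol, [round(r, 2) for r in rates], "sum =", round(sum(rates), 2))
```

The sketch deliberately omits the transport layer; for long-lived TCP connections the trade-off is sharper still, since a starved channel also keeps its TCP flows in deep congestion avoidance.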
TCP performance in a hybrid satellite network by using ACM and ARQ
The degradation of TCP efficiency over high delay-bandwidth-product, error-prone channels is a well-known problem. To reduce this degradation, we utilized a hybrid wireless network architecture, where a geostationary bent-pipe satellite channel is used for high bit-rate forward transmissions, while the return link is realized through a terrestrial 3G segment. The performance of four of the most popular TCP versions is tested and compared via simulations in terms of goodput. Our aim is to determine which version performs best when an Adaptive Coding and Modulation (ACM) technique is used, both alone and together with an Automatic Repeat reQuest (ARQ) scheme of the Selective Repeat (SR) type. ARQ is used with and without transmit and receive timeouts. The obtained results show that such a hybrid architecture is advantageous for TCP transmissions in terms of average goodput, and that ACM is effective only if used together with ARQ schemes.
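A rough intuition for why ACM alone may not help TCP while ACM plus SR-ARQ does: an aggressive modcod raises the raw rate but leaves residual frame errors, which TCP misreads as congestion; persistent SR-ARQ converts those errors into extra delay instead, so the link-layer goodput is simply the information rate scaled by 1 - p. The sketch below uses a hypothetical ACM mode table whose efficiencies and error rates are illustrative, not the paper's:

```python
def sr_arq_goodput(info_rate_bps, frame_error_rate):
    """Selective-Repeat ARQ with persistent retransmission: each frame
    needs 1/(1-p) transmissions on average, so goodput = R * (1 - p)."""
    return info_rate_bps * (1 - frame_error_rate)

# Hypothetical ACM table: mode -> (spectral efficiency bit/s/Hz, frame error rate)
ACM_MODES = {
    "QPSK 1/2":   (1.0, 1e-5),   # robust, low rate
    "8PSK 2/3":   (2.0, 1e-3),
    "16APSK 3/4": (3.0, 5e-2),   # aggressive, error-prone without ARQ
}

BW_HZ = 10e6  # illustrative forward-link bandwidth
for name, (eff, fer) in ACM_MODES.items():
    gp = sr_arq_goodput(BW_HZ * eff, fer)
    print(f"{name}: link goodput {gp / 1e6:.2f} Mbit/s")
```

With ARQ the aggressive mode still yields the highest goodput despite its error rate; without ARQ the same 5% frame loss would reach TCP directly and collapse the congestion window, matching the paper's finding that ACM pays off only in combination with ARQ.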
Adaptive cross-layer bandwidth allocation in a rain-faded satellite environment
Two control schemes, based on cross-layer adaptation and a hierarchical parametric optimization of the bandwidth allocation, are described and investigated in a satellite network environment, in the presence of both real-time and best-effort traffic flows. A number of earth stations (traffic stations) operate under different weather conditions, with different levels of fade affecting the transmitted signals. The call admission control policy for real-time connections is administered locally at the traffic stations. A master station is charged with managing the time division multiple access bandwidth allocation policy, by assigning bandwidth partitions to the traffic stations. Upon detecting significant fade changes, signalling from the traffic stations triggers new bandwidth redistributions. The control schemes are compared, and the effect of fade countermeasures, applied at the physical layer, on the bandwidth occupation is explicitly accounted for. For each policy, figures of merit such as loss, blocking and dropping probabilities are computed for a specific real environment, based on the Italsat satellite national coverage payload characteristics.
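The master station's role can be sketched as a proportional redistribution that weights each station's demand by the redundancy its fade countermeasure requires: stronger coding under rain consumes more raw TDMA capacity per unit of useful traffic. This is a hypothetical illustration of the signalling-triggered reallocation, not the paper's hierarchical parametric optimization:

```python
def redistribute(total_slots, demands, fade_redundancy):
    """Toy master-station policy: weight each traffic station's demand by
    the coding redundancy its current fade level requires, then share the
    TDMA slots proportionally to the weighted demands."""
    weighted = [d * f for d, f in zip(demands, fade_redundancy)]
    scale = total_slots / sum(weighted)
    return [w * scale for w in weighted]

# Three traffic stations with equal offered load but increasing rain fade;
# a redundancy factor of 2.0 means twice the raw capacity per useful bit.
slots = redistribute(100, [10, 10, 10], [1.0, 1.5, 2.0])
print([round(s, 1) for s in slots])
```

A fade change reported by a station simply updates its redundancy factor and triggers a new call to the allocator; the local call admission control at each station would then accept or drop real-time connections against its refreshed partition.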