10,657 research outputs found
On the Interaction between TCP and the Wireless Channel in CDMA2000 Networks
In this work, we conducted extensive active measurements on a large nationwide CDMA2000 1xRTT network in order to characterize the impact of both the Radio Link Protocol and, more importantly, the wireless scheduler on TCP. Our measurements include standard TCP/UDP logs, as well as detailed RF-layer statistics that provide visibility into RF dynamics. With the help of a robust correlation measure, normalized mutual information, we were able to quantify the impact of these two RF factors on TCP performance metrics such as round-trip time, packet loss rate, and instantaneous throughput. We show that the variable channel rate has a larger impact on TCP behavior than the Radio Link Protocol. Furthermore, we expose and rank the factors that influence the assigned channel rate itself and, in particular, demonstrate the sensitivity of the wireless scheduler to the data sending rate. Thus, TCP adapts its rate to match the available network capacity, while the rate allocated by the wireless scheduler is influenced by the sender's behavior. Such a system is best described as a closed-loop system with two feedback controllers, the TCP controller and the wireless scheduler, each affecting the other's decisions. In this work, we take the first steps in characterizing such a system in a realistic environment.
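The normalized mutual information measure used in this study can be estimated from binned samples. Below is a minimal sketch with simulated data and illustrative histogram binning, not the paper's measurement setup, assuming the common normalization NMI = I(X;Y) / sqrt(H(X) * H(Y)):

```python
import numpy as np

def normalized_mutual_information(x, y, bins=10):
    """Histogram-based NMI estimate between two sample series.
    Returns a value in [0, 1]: ~0 for independent series, 1 for identical."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()            # joint distribution estimate
    px = pxy.sum(axis=1)                 # marginal of X
    py = pxy.sum(axis=0)                 # marginal of Y
    nz = pxy > 0
    # I(X;Y) = sum over nonzero cells of pxy * log(pxy / (px * py))
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz]))
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    if hx == 0 or hy == 0:
        return 0.0
    return mi / np.sqrt(hx * hy)

rng = np.random.default_rng(0)
rtt = rng.normal(200, 20, 5000)                   # simulated RTT samples (ms)
rate = 2000.0 / rtt + rng.normal(0, 0.1, 5000)    # rate proxy coupled to RTT
noise = rng.normal(0, 1, 5000)                    # unrelated series

print(normalized_mutual_information(rtt, rate))   # clearly above the noise floor
print(normalized_mutual_information(rtt, noise))  # near zero
```

A robustness property that makes NMI attractive here is that, unlike linear correlation, it captures nonlinear dependence (such as the inverse RTT-rate coupling above).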
TCP over CDMA2000 Networks: A Cross-Layer Measurement Study
Modern cellular channels in 3G networks incorporate sophisticated power control and dynamic rate adaptation, which can have a significant impact on adaptive transport-layer protocols such as TCP. Though studies exist that have evaluated the performance of TCP over such networks, they are based solely on observations at the transport layer and hence have no visibility into the impact of lower-layer dynamics, which are a key characteristic of these networks. In this work, we present a detailed characterization of TCP behavior based on cross-layer measurement of transport-layer as well as RF- and MAC-layer parameters. In particular, through a series of active TCP/UDP experiments and measurement of the relevant variables at all three layers, we characterize both the wireless scheduler and the radio link protocol in a commercial CDMA2000 network and assess their impact on TCP dynamics. Somewhat surprisingly, our findings indicate that the wireless scheduler is mostly insensitive to channel quality and sector load over short timescales and is mainly affected by the transport-layer data rate. Furthermore, with the help of a robust correlation measure, Normalized Mutual Information, we were able to quantify the impact of the wireless scheduler and the radio link protocol on various TCP parameters such as round-trip time, throughput, and packet loss rate.
Improvements in DCCP congestion control for satellite links
We propose modifications to the TCP-Friendly Rate Control (TFRC) congestion control mechanism of the Datagram Congestion Control Protocol (DCCP), intended for use with real-time traffic, which are aimed at improving its performance over long-delay (primarily satellite) links. First, we propose an algorithm that optimises the number of feedback messages per round-trip time (RTT) based on the observed link delay, rather than using the current standard of at least one per RTT. We analyse the improvements achievable with the proposed modifications in different phases of congestion control and present results from simulations with a modified ns-2 DCCP implementation and live experiments using a modified DCCP Linux kernel implementation. We demonstrate that the changes result in improved slow-start performance and reduced data loss compared to standard DCCP, while the introduced overhead remains acceptable.
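The core idea, scaling the feedback frequency with the observed link delay instead of fixing it at one report per RTT, can be sketched with a hypothetical rule (illustrative constants, not the paper's exact algorithm):

```python
def feedbacks_per_rtt(rtt_ms, target_interval_ms=100.0, max_feedbacks=10):
    """Illustrative rule, not the paper's exact algorithm: send roughly one
    feedback report per target_interval_ms of path delay, never fewer than
    the standard one per RTT, and capped to bound the reverse-path overhead."""
    n = max(1, round(rtt_ms / target_interval_ms))
    return min(n, max_feedbacks)

print(feedbacks_per_rtt(50))    # short terrestrial path: the standard 1 per RTT
print(feedbacks_per_rtt(600))   # GEO satellite path: several reports per RTT
```

More frequent feedback on a ~600 ms GEO path lets the TFRC sender update its rate calculation well within one RTT, which is where the slow-start gains would come from; the cap keeps the added overhead acceptable.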
Non-blind watermarking of network flows
Linking network flows is an important problem in intrusion detection as well
as anonymity. Passive traffic analysis can link flows but requires long periods
of observation to reduce errors. Active traffic analysis, also known as flow
watermarking, allows for better precision and is more scalable. Previous flow
watermarks introduce significant delays to the traffic flow as a side effect of
using a blind detection scheme; this enables attacks that detect and remove the
watermark, while at the same time slowing down legitimate traffic. We propose
the first non-blind approach for flow watermarking, called RAINBOW, which
improves watermark invisibility by inserting delays hundreds of times smaller
than previous blind watermarks, hence reducing the watermark's interference with
network flows. We derive and analyze the optimum detectors for RAINBOW as well
as the passive traffic analysis under different traffic models by using
hypothesis testing. Comparing the detection performance of RAINBOW and the
passive approach, we observe that both perform similarly well for
uncorrelated traffic; however, the RAINBOW detector drastically outperforms
the optimum passive detector for correlated network flows. This justifies
the use of non-blind watermarks over passive traffic analysis, even though
both approaches have similar scalability constraints. We confirm our
analysis by simulating the detectors and testing them against large traces
of real network flows.
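The non-blind idea is that the detector already knows the original inter-packet delays (IPDs), so it can subtract them and correlate only the small residual against the secret watermark. A toy sketch with simulated flows and illustrative amplitudes, not RAINBOW's exact detector:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
a = 0.005                                # watermark amplitude: 5 ms delays
ipd = rng.exponential(0.05, n)           # original inter-packet delays (s)
w = rng.choice([-a, a], n)               # secret +/- a watermark sequence

jitter = rng.normal(0, 0.002, n)         # network jitter along the path
observed_marked = ipd + w + jitter       # flow carrying the watermark
observed_clean = ipd + jitter            # unmarked flow

def detect(observed, recorded, w, threshold=0.5):
    """Non-blind linear-correlation detection: subtract the recorded IPDs,
    then correlate the residual with the watermark sequence."""
    residual = observed - recorded
    stat = np.dot(residual, w) / (len(w) * a * a)  # ~1 if marked, ~0 if not
    return stat, stat > threshold

s1, marked1 = detect(observed_marked, ipd, w)
s2, marked2 = detect(observed_clean, ipd, w)
print(s1, marked1)   # statistic near 1, watermark detected
print(s2, marked2)   # statistic near 0, no detection
```

Because the known IPDs are removed before correlating, only jitter remains as noise, which is why delays of a few milliseconds suffice where blind schemes need delays hundreds of times larger.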
Modeling Network Coded TCP Throughput: A Simple Model and its Validation
We analyze the performance of TCP and TCP with network coding (TCP/NC) in
lossy wireless networks. We build upon the simple framework introduced by
Padhye et al. and characterize the throughput behavior of classical TCP as well
as TCP/NC as a function of erasure rate, round-trip time, maximum window size,
and duration of the connection. Our analytical results show that network coding
masks erasures and losses from TCP, thus preventing TCP's performance
degradation in lossy networks, such as wireless networks. It is further seen
that TCP/NC has significant throughput gains over TCP. In addition, we simulate
TCP and TCP/NC to verify our analysis of the average throughput and the window
evolution. Our analysis and simulation results show very close concordance and
support that TCP/NC is robust against erasures. TCP/NC is not only able to
increase its window size faster but also to maintain a large window size
despite losses within the network, whereas TCP experiences window closing
essentially because losses are mistakenly attributed to congestion.
Comment: 9 pages, 12 figures, 1 table, submitted to IEEE INFOCOM 201
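The Padhye et al. framework the authors build on gives steady-state TCP throughput in closed form; a minimal sketch of that standard formula, with illustrative parameter values:

```python
from math import sqrt

def tcp_throughput(p, rtt, wmax, t0=1.0, b=2):
    """Padhye et al. steady-state TCP throughput (packets/s) given loss
    probability p, round-trip time rtt (s), maximum window wmax (packets),
    retransmission timeout t0 (s), and b packets acknowledged per ACK."""
    if p <= 0:
        return wmax / rtt  # no loss: limited by the maximum window
    denom = (rtt * sqrt(2 * b * p / 3)
             + t0 * min(1, 3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p * p))
    return min(wmax / rtt, 1 / denom)

print(tcp_throughput(0.0, 0.1, 64))    # 640.0 packets/s, window-limited
print(tcp_throughput(0.02, 0.1, 64))   # loss-limited regime, far lower
```

The comparison makes the paper's point concrete: since throughput falls roughly as 1/sqrt(p), a coding layer that masks erasures (keeping the p that TCP sees near zero) prevents this degradation on lossy wireless links.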
DiFX2: A more flexible, efficient, robust and powerful software correlator
Software correlation, where a correlation algorithm written in a high-level
language such as C++ is run on commodity computer hardware, has become
increasingly attractive for small to medium sized and/or bandwidth constrained
radio interferometers. In particular, many long baseline arrays (which
typically have fewer than 20 elements and are restricted in observing bandwidth
by costly recording hardware and media) have utilized software correlators for
rapid, cost-effective correlator upgrades to allow compatibility with new,
wider bandwidth recording systems and improve correlator flexibility. The DiFX
correlator, made publicly available in 2007, has been a popular choice in such
upgrades and is now used for production correlation by a number of
observatories and research groups worldwide. Here we describe the evolution in
the capabilities of the DiFX correlator over the past three years, including a
number of new capabilities, substantial performance improvements, and a large
amount of supporting infrastructure to ease use of the code. New capabilities
include the ability to correlate a large number of phase centers in a single
correlation pass, the extraction of phase calibration tones, correlation of
disparate but overlapping sub-bands, the production of rapidly sampled
filterbank and kurtosis data at minimal cost, and many more. The latest version
of the code is at least 15% faster than the original and, in certain
situations, many times faster. Finally, we also present detailed test
results validating the correctness of the new code.
Comment: 28 pages, 9 figures, accepted for publication in PAS
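Software correlators such as DiFX implement the FX architecture: Fourier-transform each station's voltage stream, then cross-multiply and accumulate. A toy single-baseline sketch (illustrative only, far simpler than DiFX's multi-station, multi-band pipeline):

```python
import numpy as np

def fx_correlate(x, y, nchan=64):
    """Minimal FX correlator: segment both voltage streams, FFT each segment
    (the 'F' step), cross-multiply conjugate spectra and accumulate (the 'X'
    step). Returns the time-averaged cross-power spectrum."""
    nseg = min(len(x), len(y)) // nchan
    acc = np.zeros(nchan, dtype=complex)
    for i in range(nseg):
        X = np.fft.fft(x[i * nchan:(i + 1) * nchan])
        Y = np.fft.fft(y[i * nchan:(i + 1) * nchan])
        acc += X * np.conj(Y)
    return acc / nseg

rng = np.random.default_rng(2)
signal = rng.normal(0, 1, 64 * 200)              # common sky signal
station_a = signal + rng.normal(0, 0.5, signal.size)  # independent receiver noise
station_b = signal + rng.normal(0, 0.5, signal.size)
spec = fx_correlate(station_a, station_b)
# With no delay between stations the cross-spectrum phase is ~0; a geometric
# delay would appear as a linear phase slope across the channels.
print(np.abs(np.angle(spec[1:])).mean())
```

Being an embarrassingly parallel loop over segments and baselines, this structure maps naturally onto commodity clusters, which is what makes software correlation attractive for arrays with fewer than ~20 elements.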
End-to-End Simulation of 5G mmWave Networks
Due to its potential for multi-gigabit and low latency wireless links,
millimeter wave (mmWave) technology is expected to play a central role in 5th
generation cellular systems. While there has been considerable progress in
understanding the mmWave physical layer, innovations will be required at all
layers of the protocol stack, in both the access and the core network.
Discrete-event network simulation is essential for end-to-end, cross-layer
research and development. This paper provides a tutorial on a recently
developed full-stack mmWave module integrated into the widely used open-source
ns-3 simulator. The module includes a number of detailed statistical channel
models as well as the ability to incorporate real measurements or ray-tracing
data. The Physical (PHY) and Medium Access Control (MAC) layers are modular and
highly customizable, making it easy to integrate algorithms or compare
Orthogonal Frequency Division Multiplexing (OFDM) numerologies, for example.
The module is interfaced with the core network of the ns-3 Long Term Evolution
(LTE) module for full-stack simulations of end-to-end connectivity, and
advanced architectural features, such as dual-connectivity, are also available.
To facilitate the understanding of the module, and verify its correct
functioning, we provide several examples that show the performance of the
custom mmWave stack as well as custom congestion control algorithms designed
specifically for efficient utilization of the mmWave channel.
Comment: 25 pages, 16 figures, submitted to IEEE Communications Surveys and
Tutorials (revised Jan. 2018)
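The large, abrupt rate variations that make end-to-end mmWave simulation interesting stem largely from blockage. A simplified log-distance path-loss sketch in the spirit of the 3GPP-style statistical channel models such a module includes (illustrative coefficients, not the module's exact implementation):

```python
import math

def mmwave_path_loss_db(d_m, fc_ghz=28.0, los=True):
    """Simplified log-distance path loss (dB) at distance d_m metres and
    carrier fc_ghz GHz, with separate line-of-sight (LOS) and non-LOS
    coefficients. Coefficients are illustrative 3GPP-urban-style values."""
    if los:
        return 32.4 + 21.0 * math.log10(d_m) + 20.0 * math.log10(fc_ghz)
    return 22.4 + 35.3 * math.log10(d_m) + 21.3 * math.log10(fc_ghz)

loss_los = mmwave_path_loss_db(100.0)            # LOS at 100 m, 28 GHz
loss_nlos = mmwave_path_loss_db(100.0, los=False)
print(loss_nlos - loss_los)  # extra attenuation (dB) when blockage breaks LOS
```

A LOS-to-NLOS transition at the same distance costs on the order of 20 dB here, an abrupt capacity drop that the transport layer sees end-to-end; this is exactly the behavior that motivates the custom congestion control algorithms the examples evaluate.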