Predicting expected TCP throughput using genetic algorithm
Predicting the expected throughput of TCP is important for several purposes, such as determining handover criteria for future multihomed mobile nodes or estimating the expected throughput of a given MPTCP subflow for load balancing. However, this is challenging due to the time-varying behavior of the underlying network characteristics. In this paper, we present a genetic-algorithm-based prediction model for estimating TCP throughput values. Our approach uses a genetic algorithm to find the best-matching combination of mathematical functions that approximates a given time series of TCP throughput samples. Based on collected historical data points of measured TCP throughput, our algorithm estimates the expected throughput over time. We evaluate the quality of the prediction using different selection and diversity strategies for creating new chromosomes. We also explore the use of different fitness functions to evaluate the goodness of a chromosome. The goal is to show how different tunings of the genetic algorithm affect the prediction. Using extensive simulations over several TCP throughput traces, we find that the genetic algorithm successfully identifies mathematical functions that describe the sampled TCP throughput values with good fidelity. We also explore the effectiveness of predicting time-series throughput samples for a given prediction horizon and estimate the prediction error and confidence.
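As a rough illustration of the idea, the sketch below evolves the coefficients of a tiny fixed function family (a linear trend plus one sinusoid, an assumption chosen here for illustration, not the paper's actual function set) against a synthetic throughput trace, using truncation selection, one-point crossover, and Gaussian mutation:

```python
import math
import random

def fitness(chromosome, samples):
    """Mean squared error between the chromosome's function and the samples.

    A chromosome encodes one candidate function a + b*t + c*sin(d*t).
    """
    a, b, c, d = chromosome
    err = 0.0
    for t, y in enumerate(samples):
        pred = a + b * t + c * math.sin(d * t)
        err += (pred - y) ** 2
    return err / len(samples)

def evolve(samples, pop_size=60, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, samples))
        survivors = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, 4)         # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:            # Gaussian mutation keeps diversity
                i = rng.randrange(4)
                child[i] += rng.gauss(0, 0.5)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda c: fitness(c, samples))

# Synthetic "throughput" trace: linear trend plus a periodic component.
trace = [2.0 + 0.1 * t + 1.5 * math.sin(0.4 * t) for t in range(50)]
best = evolve(trace)
print(fitness(best, trace))
```

Swapping the truncation selection or the mutation rate above corresponds directly to the selection and diversity strategies the abstract says were compared.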
Towards Data-driven Simulation of End-to-end Network Performance Indicators
Novel vehicular communication methods are mostly analyzed simulatively or analytically, as real-world performance tests are highly time-consuming and cost-intensive. Moreover, the high number of uncontrollable effects makes it practically impossible to reevaluate different approaches under exactly the same conditions. However, as these methods massively simplify the effects of the radio environment and various cross-layer interdependencies, the results for end-to-end indicators (e.g., the resulting data rate) often differ significantly from real-world measurements. In this paper, we present a data-driven approach that exploits a combination of multiple machine learning methods for modeling the end-to-end behavior of network performance indicators within vehicular networks. The proposed approach can be exploited for fast and close-to-reality evaluation and optimization of new methods in a controllable environment, as it implicitly considers cross-layer dependencies between measurable features. Within an example case study of opportunistic vehicular data transfer, the proposed approach is validated against real-world measurements and a classical system-level network simulation setup. Although the proposed method requires only a fraction of the computation time of the latter, it achieves a significantly better match with the real-world evaluations.
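The core of such a data-driven model is a regressor that maps measurable features to an end-to-end indicator. The minimal sketch below uses k-nearest-neighbours regression as a stand-in for the combination of machine-learning methods the paper describes; the feature names (SNR, vehicle speed) and values are illustrative assumptions, not the paper's actual feature set or data:

```python
import math

def knn_predict(train, query, k=3):
    """Predict an end-to-end indicator (here: data rate) from measurable
    features via k-nearest-neighbours regression.

    `train` is a list of (feature_vector, data_rate) pairs; the model
    implicitly captures cross-layer dependencies because it interpolates
    between jointly measured feature/indicator samples.
    """
    dists = sorted((math.dist(f, query), rate) for f, rate in train)
    nearest = dists[:k]
    return sum(rate for _, rate in nearest) / len(nearest)

# Toy measurement set: (SNR in dB, speed in km/h) -> data rate in Mbit/s.
measurements = [
    ((25.0, 30.0), 18.0),
    ((20.0, 50.0), 12.0),
    ((10.0, 80.0), 4.0),
    ((28.0, 20.0), 20.0),
    ((12.0, 70.0), 5.0),
    ((22.0, 40.0), 14.0),
]
print(knn_predict(measurements, (21.0, 45.0)))
```

Once trained on real-world traces, such a model can replace the channel and protocol stack of a simulator, which is why it runs in a fraction of the time of a full system-level simulation.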
A Routing Delay Prediction Based on Packet Loss and Explicit Delay Acknowledgement for Congestion Control in MANET
In Mobile Ad hoc Networks, congestion control and prevention are demanding because of node mobility and dynamic topology. Congestion occurs primarily under large traffic volumes, when the inflow rate of data traffic exceeds the rate at which a node can forward packets. This mismatch results in routing delays and low throughput. Rate control is a significant concern in streaming applications, especially in wireless networks. The TCP-friendly rate control method is widely recognized as a rate control mechanism for wired networks and is effective in minimizing packet loss (PL) in the event of congestion. In this paper, we propose a routing delay prediction based on PL and an Explicit Delay Acknowledgement (EDA) mechanism for data rate and congestion control in MANETs, which controls the data rate to minimize packet loss and improve throughput. The experiment is performed over a reactive routing protocol to reduce packet loss and jitter and to improve throughput.
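The TCP-friendly rate control mechanism referenced above is built around the TFRC throughput equation of RFC 5348, which bounds the sending rate by what a conforming TCP flow would achieve under the same loss rate and round-trip time. A minimal sketch of that equation (the EDA extension the paper proposes is not reproduced here):

```python
import math

def tfrc_rate(s, rtt, p, t_rto=None, b=1):
    """TCP-friendly sending rate in bytes/s per the TFRC throughput
    equation (RFC 5348): packet size s, round-trip time rtt in seconds,
    loss event rate p, retransmission timeout t_rto (defaulting to
    4*rtt as the RFC recommends), and b packets acknowledged per ACK.
    """
    if t_rto is None:
        t_rto = 4 * rtt
    denom = (rtt * math.sqrt(2 * b * p / 3)
             + t_rto * (3 * math.sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))
    return s / denom

# Halving the loss event rate raises the allowed sending rate.
r1 = tfrc_rate(s=1460, rtt=0.1, p=0.02)
r2 = tfrc_rate(s=1460, rtt=0.1, p=0.01)
print(r1, r2)
```

The equation makes the congestion coupling explicit: higher measured loss or delay directly lowers the permitted data rate, which is the lever the proposed mechanism adjusts.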
Hierarchical Sparse Coding for Wireless Link Prediction in an Airborne Scenario
We build a data-driven hierarchical inference model to predict wireless link quality between a mobile unmanned aerial vehicle (UAV) and ground nodes. Clustering, sparse feature extraction, and non-linear pooling are combined to improve Support Vector Machine (SVM) classification when a limited training set does not comprehensively characterize data variations. Our approach first learns two layers of dictionaries by clustering packet reception data. These dictionaries are used to perform sparse feature extraction, which expresses link state vectors first in terms of a few prominent local patterns, or features, and then in terms of co-occurring features along the flight path. In order to tolerate artifacts like small positional shifts in field-collected data, we pool large-magnitude features among overlapping shifted patches within windows. Together, these techniques transform raw link measurements into stable feature vectors that capture environmental effects driven by radio range limitations, antenna pattern variations, line-of-sight occlusions, etc. Link outage prediction is implemented by an SVM that assigns a common label to feature vectors immediately preceding gaps of successive packet losses; predictions are then fed to an adaptive link layer protocol that adjusts forward error correction rates or queues packets during outages to prevent TCP timeout. In our harsh target environment, links are unstable and temporary outages are common, so baseline TCP connections achieve only minimal throughput. However, connections under our predictive protocol temporarily hold packets that would otherwise be lost on unavailable links and react quickly when the UAV link is restored, increasing overall channel utilization.
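The two front-end steps, sparse coding against a learned dictionary and magnitude pooling over shifted patches, can be sketched compactly. The version below uses greedy matching pursuit and a toy hand-picked dictionary as simplified stand-ins for the paper's learned two-layer dictionaries; the data values are illustrative only:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sparse_code(x, atoms, n_atoms=2):
    """Greedy matching pursuit: approximate vector x with a few unit-norm
    dictionary atoms, yielding a sparse coefficient vector in which only
    the most prominent local patterns are active."""
    residual = list(x)
    coeffs = [0.0] * len(atoms)
    for _ in range(n_atoms):
        # Pick the atom most correlated with the current residual.
        best = max(range(len(atoms)),
                   key=lambda i: abs(dot(atoms[i], residual)))
        c = dot(atoms[best], residual)
        coeffs[best] += c
        residual = [r - c * d for r, d in zip(residual, atoms[best])]
    return coeffs

def max_pool(rows, window=2):
    """Keep the largest-magnitude value per feature across overlapping
    shifted patches, so small positional shifts in field-collected data
    do not change the pooled feature vector."""
    pooled = []
    for start in range(len(rows) - window + 1):
        block = rows[start:start + window]
        pooled.append([max((r[i] for r in block), key=abs)
                       for i in range(len(block[0]))])
    return pooled

# Toy dictionary of unit-norm atoms and two consecutive link-state vectors.
atoms = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
codes = [sparse_code([0.9, 0.1, 0.0], atoms),
         sparse_code([0.2, -0.5, 0.0], atoms)]
print(max_pool(codes))
```

The pooled vectors are what the SVM classifies; pooling by magnitude is what buys the shift tolerance the abstract emphasizes.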