An Analysis on End-To-End Inference Methods based On Packet Probing in Network
The Internet is a massive, distributed network which continues to grow in size as globalization makes activities such as e-commerce and social networking part of everyday life. The heterogeneous and largely unregulated structure of the Internet makes tasks such as optimized service provisioning, rate-limiting certain classes of applications (e.g. peer-to-peer), providing bandwidth guarantees for specific applications, and avoiding shared congestion among flows increasingly challenging. The problem is complicated by the fact that one cannot rely on the cooperation of individual servers and routers to aid in the collection of the network traffic measurements vital for these tasks. Hence we turn to network monitoring and inference methods based on packet probing in the network. This paper presents an analysis of different inference methods for network characteristics, covering shared congestion, packet-forwarding priority, and network tomography, and evaluates each methodology based on packet loss rate and delay variance.
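The two evaluation metrics named above can be computed directly from end-to-end probe records. A minimal sketch, with an entirely hypothetical set of probe timestamps (a lost probe is recorded with no receive time):

```python
# Summarize end-to-end probe results into the two metrics used to compare
# inference methods: packet loss rate and delay variance.
# The probe records below are made-up illustration data.
import statistics

# (send_time, recv_time) per probe; recv_time None means the probe was lost.
probes = [(0.00, 0.031), (0.01, None), (0.02, 0.029), (0.03, 0.034), (0.04, None)]

lost = sum(1 for _, r in probes if r is None)
loss_rate = lost / len(probes)              # fraction of probes lost

delays = [r - s for s, r in probes if r is not None]
delay_variance = statistics.pvariance(delays)
```

Real probing tools aggregate thousands of such records per path; the point here is only that both metrics are observable at the edge without router cooperation.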
Active Topology Inference using Network Coding
Our goal is to infer the topology of a network when (i) we can send probes
between sources and receivers at the edge of the network and (ii) intermediate
nodes can perform simple network coding operations, i.e., additions. Our key
intuition is that network coding introduces topology-dependent correlation in
the observations at the receivers, which can be exploited to infer the
topology. For undirected tree topologies, we design hierarchical clustering
algorithms, building on our prior work. For directed acyclic graphs (DAGs),
first we decompose the topology into a number of two-source, two-receiver
(2-by-2) subnetwork components and then we merge these components to
reconstruct the topology. Our approach for DAGs builds on prior work on
tomography, and improves upon it by employing network coding to accurately
distinguish among all different 2-by-2 components. We evaluate our algorithms
through simulation of a number of realistic topologies and compare them to
active tomographic techniques without network coding. We also make connections
between our approach and alternatives, including passive inference, traceroute,
and packet marking.
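The key intuition above — that additions at a coding node induce topology-dependent correlation — can be sketched in a toy simulation. The three-receiver topology and value ranges below are invented for illustration, not taken from the paper:

```python
# Toy sketch: a coding node adds the probe values arriving from two sources,
# so receivers downstream of that node observe identical (perfectly
# correlated) sums, which a clustering step can exploit to infer topology.
import random

random.seed(1)
trials = [(random.randint(0, 255), random.randint(0, 255)) for _ in range(100)]

# Hypothetical topology: sources S1 and S2 meet at coding node C;
# receivers R1 and R2 hang off C (both see the sum), R3 hears only S2.
obs = {"R1": [], "R2": [], "R3": []}
for x1, x2 in trials:
    coded = x1 + x2          # network coding operation: simple addition at C
    obs["R1"].append(coded)
    obs["R2"].append(coded)
    obs["R3"].append(x2)

# Receivers with identical observation sequences share the coding node.
clusters = {}
for r, seq in obs.items():
    clusters.setdefault(tuple(seq), []).append(r)
groups = sorted(sorted(g) for g in clusters.values())
```

Hierarchical clustering on such correlations is what lets the receivers be grouped by shared internal branch points.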
Delay estimation in computer networks
Computer networks are becoming increasingly large and complex; more so with the recent
penetration of the internet into all walks of life. It is essential to be able to monitor and
to analyse networks in a timely and efficient manner; to extract important metrics and
measurements and to do so in a way which does not unduly disturb or affect the performance
of the network under test. Network tomography is one possible method to accomplish these
aims. Drawing upon the principles of statistical inference, it is often possible to determine
the statistical properties of either the links or the paths of the network, whichever is desired,
by measuring at the most convenient points thus reducing the effort required. In particular,
bottleneck-link detection methods in which estimates of the delay distributions on network
links are inferred from measurements made at end-points on network paths, are examined as a
means to determine which links of the network are experiencing the highest delay.
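The tomographic idea behind such bottleneck detection is that mean path delays are a linear combination of mean link delays (y = Ax, with A the routing matrix), so link-level estimates follow from end-to-end measurements. A minimal sketch on an invented two-link, three-path topology:

```python
# Network tomography sketch: recover per-link mean delays from path delays
# via least squares, then flag the highest-delay link as the bottleneck.
# The routing matrix and delay values are hypothetical.
import numpy as np

# Rows = paths, columns = links.
# Path 0 traverses links 0 and 1; path 1 only link 0; path 2 only link 1.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
true_link_delays = np.array([5.0, 20.0])   # link 1 is the bottleneck (ms)
y = A @ true_link_delays                   # observed mean path delays

x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)   # inferred link delays
bottleneck = int(np.argmax(x_hat))
```

With noisy measurements the same system is solved statistically, which is where the delay-distribution models discussed next come in.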
Initially two published methods, one based upon a single Gaussian distribution and the other
based upon the method-of-moments, are examined by comparing their performance using three
metrics: robustness to scaling, bottleneck detection accuracy and computational complexity.
Whilst there are many published algorithms, there is little literature in which said algorithms
are objectively compared. In this thesis, two network topologies are considered, each with
three configurations in order to determine performance in six scenarios. Two new estimation
methods are then introduced, both based on Gaussian mixture models which are believed to
offer an advantage over existing methods in certain scenarios. Computationally, a mixture
model algorithm is much more complex than a simple parametric algorithm but the flexibility
in modelling an arbitrary distribution is vastly increased. Better model accuracy potentially
leads to more accurate estimation and detection of the bottleneck.
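The flexibility argument can be illustrated with a small EM fit of a two-component 1-D Gaussian mixture to bimodal delay samples, which a single Gaussian cannot represent. The data, initialisation, and component count below are assumptions for the sketch:

```python
# Minimal EM for a 1-D two-component Gaussian mixture, sketching why a
# mixture models multi-modal link delays better than a single Gaussian.
# Synthetic delays: a typical mode near 10 and a congested mode near 30.
import math
import random

random.seed(0)
data = ([random.gauss(10.0, 1.0) for _ in range(300)] +
        [random.gauss(30.0, 2.0) for _ in range(100)])

def pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Parameters per component: (weight, mean, std dev); rough initial guesses.
params = [(0.5, 5.0, 5.0), (0.5, 25.0, 5.0)]
for _ in range(50):                       # EM iterations
    # E-step: responsibility of each component for each sample.
    resp = []
    for x in data:
        w = [p * pdf(x, m, s) for p, m, s in params]
        tot = sum(w)
        resp.append([v / tot for v in w])
    # M-step: re-estimate weights, means and variances.
    new = []
    for k in range(2):
        rk = [r[k] for r in resp]
        nk = sum(rk)
        mu = sum(r * x for r, x in zip(rk, data)) / nk
        var = sum(r * (x - mu) ** 2 for r, x in zip(rk, data)) / nk
        new.append((nk / len(data), mu, math.sqrt(var)))
    params = new

means = sorted(m for _, m, _ in params)
```

The iterative E/M loop is exactly the extra computational cost the text refers to, traded for the ability to fit an arbitrary multi-modal shape.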
The concept of increasing flexibility is again considered by using a Pearson type-1 distribution
as an alternative to the single Gaussian distribution. This increases the flexibility but with
a reduced complexity when compared with mixture model approaches which necessitate the
use of iterative approximation methods. A hybrid approach is also considered where the
method-of-moments is combined with the Pearson type-1 method in order to circumvent
problems with the output stage of the former. This algorithm has a higher variance than
the method-of-moments but the output stage is more convenient for manipulation. Also
considered is a new approach to detection algorithms which is not dependent on any a priori
parameter selection and makes use of the Kullback-Leibler divergence. The results show that it
accomplishes its aim but is not robust enough to replace the current algorithms.
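A detection scheme of this flavour can be sketched by scoring each link's estimated delay histogram against a reference distribution and flagging the largest divergence; the histograms and link names below are hypothetical:

```python
# Sketch of KL-divergence-based bottleneck detection: no tuned threshold
# parameters, just a relative comparison of divergence scores.
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions over the same bins."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

reference = [0.7, 0.2, 0.1]        # baseline delay histogram (illustrative)
links = {
    "link_a": [0.68, 0.22, 0.10],  # close to the baseline
    "link_b": [0.10, 0.25, 0.65],  # mass shifted to high delay: candidate
}
scores = {name: kl_divergence(hist, reference) for name, hist in links.items()}
flagged = max(scores, key=scores.get)
```

In practice the zero-probability bins that make raw KL undefined need smoothing, which is one source of the robustness problems the text mentions.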
Delay estimation is then cast in a different role, as an integral part of an algorithm to correlate
input and output streams in an anonymising network such as The Onion Router (Tor). Tor is
used to conceal network traffic from observation. Breaking the encryption protocols used is
not possible without significant effort, but by correlating the unencrypted input and output
streams of the Tor network, it is possible to provide a degree
of certainty about the ownership of traffic streams. The delay model is essential as the network
is treated as providing a pseudo-random delay to each packet; having an accurate model allows
the algorithm to better correlate the streams.
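The timing-correlation idea can be sketched as follows: model the relay network as adding a small pseudo-random delay to each packet, then match an exit stream to the entry stream whose inter-packet gaps correlate best. The streams and delay model are synthetic and not the thesis's actual algorithm:

```python
# Toy timing-correlation attack sketch against an anonymising relay.
import random
from itertools import accumulate

random.seed(2)

def gaps(times):
    """Inter-packet gaps of a sequence of timestamps."""
    return [b - a for a, b in zip(times, times[1:])]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

# Two independent entry streams of 50 packet timestamps each.
entry_a = list(accumulate(random.uniform(0.01, 0.5) for _ in range(50)))
entry_b = list(accumulate(random.uniform(0.01, 0.5) for _ in range(50)))
# The network adds a small pseudo-random delay per packet of stream a.
exit_stream = [t + random.uniform(0.0, 0.02) for t in entry_a]

scores = {name: pearson(gaps(stream), gaps(exit_stream))
          for name, stream in [("a", entry_a), ("b", entry_b)]}
matched = max(scores, key=scores.get)
```

An accurate model of the added delay is what keeps the gap sequences correlated despite the jitter, which is why the thesis's delay estimation work carries over to this setting.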
Active Learning of Multiple Source Multiple Destination Topologies
We consider the problem of inferring the topology of a network with M
sources and N receivers (hereafter referred to as an M-by-N network), by
sending probes between the sources and receivers. Prior work has shown that
this problem can be decomposed into two parts: first, infer smaller subnetwork
components (i.e., 1-by-N's or 2-by-2's) and then merge these components
to identify the M-by-N topology. In this paper, we focus on the second
part, which had previously received less attention in the literature. In
particular, we assume that a 1-by-N topology is given and that all
2-by-2 components can be queried and learned using end-to-end probes. The
problem is which 2-by-2's to query and how to merge them with the given
1-by-N, so as to exactly identify the M-by-N topology, and optimize a
number of performance metrics, including the number of queries (which directly
translates into measurement bandwidth), time complexity, and memory usage. We
provide a lower bound on the number of 2-by-2's required by any active
learning algorithm and propose two greedy algorithms. The first algorithm
follows the framework of multiple hypothesis testing, in particular
Generalized Binary Search (GBS), since our problem is one of active learning
from 2-by-2 queries. The second algorithm is called the Receiver Elimination
Algorithm (REA) and follows a bottom-up approach: at every step, it selects
two receivers, queries the corresponding 2-by-2, and merges it with the given
1-by-N; it requires a number of steps linear in N, which is much less than
the number of all possible 2-by-2's. Simulation results over synthetic and
realistic topologies demonstrate that both algorithms correctly identify the
M-by-N topology and are near-optimal, but REA is more efficient in practice.
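The Generalized Binary Search framework the first algorithm builds on can be sketched abstractly: keep a set of candidate hypotheses and greedily ask the query whose answer splits the set most evenly. The hypotheses and query model below are an illustrative toy, not the paper's actual 2-by-2 queries:

```python
# Generalized Binary Search sketch: greedy query selection for active
# hypothesis elimination. Here each "topology" is just an integer 0..15
# and query i reveals bit i of the true hypothesis.
hypotheses = set(range(16))        # 16 candidate topologies
queries = list(range(4))           # four available probe queries
truth = 11                         # the unknown true topology

def answer(h, q):
    """What hypothesis h would answer to query q (toy model: bit q of h)."""
    return (h >> q) & 1

asked = []
while len(hypotheses) > 1:
    # Greedy GBS step: pick the unasked query splitting candidates most evenly.
    q = min((q for q in queries if q not in asked),
            key=lambda q: abs(2 * sum(answer(h, q) for h in hypotheses)
                              - len(hypotheses)))
    asked.append(q)
    a = answer(truth, q)                                   # run the probe
    hypotheses = {h for h in hypotheses if answer(h, q) == a}

identified = hypotheses.pop()
```

Each perfectly balanced query halves the candidate set, so 16 hypotheses are resolved in 4 queries; REA's bottom-up merging avoids maintaining the candidate set at all, which is why it is cheaper in practice.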
Challenges in imaging and predictive modeling of rhizosphere processes
Background: Plant-soil interaction is central to human food production and ecosystem function. Thus, it is essential to not only understand, but also to develop predictive mathematical models which can be used to assess how climate and soil management practices will affect these interactions.
Scope: In this paper we review the current developments in structural and chemical imaging of rhizosphere processes within the context of multiscale mathematical image-based modeling. We outline areas that need more research and areas which would benefit from more detailed understanding.
Conclusions: We conclude that the combination of structural and chemical imaging with modeling is an incredibly powerful tool which is fundamental for understanding how plant roots interact with soil. We emphasize the need for more researchers to be attracted to this area that is so fertile for future discoveries. Finally, model building must go hand in hand with experiments. In particular, there is a real need to integrate rhizosphere structural and chemical imaging with modeling for better understanding of the rhizosphere processes, leading to models which explicitly account for pore-scale processes.