10 research outputs found

    Active Learning of Multiple Source Multiple Destination Topologies

    Get PDF
    We consider the problem of inferring the topology of a network with M sources and N receivers (hereafter referred to as an M-by-N network), by sending probes between the sources and receivers. Prior work has shown that this problem can be decomposed into two parts: first, infer smaller subnetwork components (i.e., 1-by-N's or 2-by-2's) and then merge these components to identify the M-by-N topology. In this paper, we focus on the second part, which had previously received less attention in the literature. In particular, we assume that a 1-by-N topology is given and that all 2-by-2 components can be queried and learned using end-to-end probes. The problem is which 2-by-2's to query and how to merge them with the given 1-by-N, so as to exactly identify the 2-by-N topology, and optimize a number of performance metrics, including the number of queries (which directly translates into measurement bandwidth), time complexity, and memory usage. We provide a lower bound, $\lceil N/2 \rceil$, on the number of 2-by-2's required by any active learning algorithm and propose two greedy algorithms. The first algorithm follows the framework of multiple hypothesis testing, in particular Generalized Binary Search (GBS), since our problem is one of active learning from 2-by-2 queries. The second algorithm is called the Receiver Elimination Algorithm (REA) and follows a bottom-up approach: at every step, it selects two receivers, queries the corresponding 2-by-2, and merges it with the given 1-by-N; it requires exactly N-1 steps, which is much less than the $\binom{N}{2}$ possible 2-by-2's. Simulation results over synthetic and realistic topologies demonstrate that both algorithms correctly identify the 2-by-N topology and are near-optimal, but REA is more efficient in practice.
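
    As a rough illustration of the query budgets mentioned above (the helper name and comparison script are my own, not from the paper), the following sketch contrasts the lower bound $\lceil N/2 \rceil$, REA's N-1 queries, and the $\binom{N}{2}$ candidate 2-by-2's:

```python
# Hypothetical helper (not from the paper): compare the query budgets
# discussed in the abstract for an N-receiver network.
from math import ceil, comb

def query_counts(n_receivers):
    """Return the lower bound ceil(N/2) on 2-by-2 queries, the N-1 queries
    used by the Receiver Elimination Algorithm (REA), and the total number
    of candidate 2-by-2 components, C(N, 2)."""
    n = n_receivers
    return {
        "lower_bound": ceil(n / 2),   # no active learner can do better
        "rea_queries": n - 1,         # REA takes exactly N-1 steps
        "all_2by2s": comb(n, 2),      # brute force: query every 2-by-2
    }

if __name__ == "__main__":
    for n in (4, 8, 16, 32):
        print(n, query_counts(n))
```

    Even for moderate N, the gap between N-1 queries and the $\binom{N}{2}$ candidates is the saving the abstract attributes to REA.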

    Maximum Likelihood Estimation for Linear Gaussian Covariance Models

    Full text link
    We study parameter estimation in linear Gaussian covariance models, which are p-dimensional Gaussian models with linear constraints on the covariance matrix. Maximum likelihood estimation for this class of models leads to a non-convex optimization problem which typically has many local maxima. Using recent results on the asymptotic distribution of extreme eigenvalues of the Wishart distribution, we provide sufficient conditions for any hill-climbing method to converge to the global maximum. Although we are primarily interested in the case in which $n \gg p$, the proofs of our results utilize large-sample asymptotic theory under the scheme $n/p \to \gamma > 1$. Remarkably, our numerical simulations indicate that our results remain valid for p as small as 2. An important consequence of this analysis is that for sample sizes $n \simeq 14p$, maximum likelihood estimation for linear Gaussian covariance models behaves as if it were a convex optimization problem.
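
    A minimal sketch of the hill-climbing setup described above, assuming a toy two-matrix linear covariance model Sigma(theta) = theta_0*I + theta_1*J; the basis matrices, helper names, and optimizer choice are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, basis, S, n):
    """Negative Gaussian log-likelihood (up to constants) for a covariance
    matrix constrained to be a linear combination of fixed symmetric
    matrices: Sigma(theta) = sum_i theta_i * basis[i]."""
    sigma = sum(t * G for t, G in zip(theta, basis))
    # Reject parameter values where Sigma is not positive definite.
    if np.linalg.eigvalsh(sigma).min() <= 1e-10:
        return np.inf
    _, logdet = np.linalg.slogdet(sigma)
    return 0.5 * n * (logdet + np.trace(np.linalg.solve(sigma, S)))

# Toy example (p = 2): Sigma(theta) = theta_0 * I + theta_1 * J, where J has
# ones off the diagonal -- a simple linear Gaussian covariance model.
rng = np.random.default_rng(0)
p, n = 2, 200
basis = [np.eye(p), np.array([[0.0, 1.0], [1.0, 0.0]])]
true_sigma = 1.0 * basis[0] + 0.4 * basis[1]
X = rng.multivariate_normal(np.zeros(p), true_sigma, size=n)
S = (X.T @ X) / n

# Generic hill-climbing from a feasible starting point.
res = minimize(neg_log_likelihood, x0=np.array([1.0, 0.0]),
               args=(basis, S, n), method="Nelder-Mead")
print("estimated theta:", res.x)
```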

    Latent tree models

    Full text link
    Latent tree models are graphical models defined on trees, in which only a subset of variables is observed. They were first discussed by Judea Pearl as tree-decomposable distributions to generalise star-decomposable distributions such as the latent class model. Latent tree models, or their submodels, are widely used in phylogenetic analysis, network tomography, computer vision, causal modeling, and data clustering. They also contain other well-known classes of models such as hidden Markov models, the Brownian motion tree model, the Ising model on a tree, and many popular models used in phylogenetics. This article offers a concise introduction to the theory of latent tree models. We emphasise the role of tree metrics in the structural description of this model class, in designing learning algorithms, and in understanding the fundamental limits of what can be learned and when.
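
    Since the article emphasises tree metrics, here is a small sketch (my own illustration, not from the article) of the four-point condition that characterizes additive tree metrics and underlies distance-based structure learning:

```python
from itertools import combinations

def four_point_condition(D, tol=1e-9):
    """Check the four-point condition characterizing tree (additive) metrics:
    for every quadruple i, j, k, l, the two largest of the sums
    D[i][j]+D[k][l], D[i][k]+D[j][l], D[i][l]+D[j][k] must be equal."""
    n = len(D)
    for i, j, k, l in combinations(range(n), 4):
        sums = sorted([D[i][j] + D[k][l],
                       D[i][k] + D[j][l],
                       D[i][l] + D[j][k]])
        if abs(sums[2] - sums[1]) > tol:
            return False
    return True

# Leaf-to-leaf distances on the tree ((a,b),(c,d)) with unit edge lengths
# and a unit-length internal edge: an additive (tree) metric.
D = [[0, 2, 3, 3],
     [2, 0, 3, 3],
     [3, 3, 0, 2],
     [3, 3, 2, 0]]
print(four_point_condition(D))  # True
```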

    Link Delay Estimation via Expander Graphs

    Full text link
    One of the purposes of network tomography is to infer the status of parameters (e.g., delay) for the links inside a network through end-to-end probing between (external) boundary nodes along predetermined routes. In this work, we apply concepts from compressed sensing and expander graphs to the delay estimation problem. We first show that a relative majority of network topologies are not expanders under existing expansion criteria. Motivated by this challenge, we then relax these criteria, enabling us to acquire simulation evidence that link delays can be estimated for 30% more networks. That is, our relaxation expands the list of identifiable networks with bounded estimation error by 30%. We conduct a simulation-based performance analysis of delay estimation and congestion detection based on l1 minimization, demonstrating that accurate estimation is feasible for an increasing proportion of networks.
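
    A minimal sketch of the l1-minimization step, posed as a linear program over a toy routing matrix of my own (the paper's actual networks and expansion criteria are not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    """Recover a sparse link-delay vector x from end-to-end path delays
    y = A x by l1 minimization, cast as a linear program:
        minimize sum(t)  subject to  -t <= x <= t,  A x = y,
    with decision variables stacked as [x, t]."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])
    A_ub = np.block([[np.eye(n), -np.eye(n)],    #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])  # -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# Toy routing matrix: 4 end-to-end paths over 6 links, one congested link.
A = np.array([[1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 1],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 1, 1, 0]], dtype=float)
x_true = np.array([0, 0, 5.0, 0, 0, 0])   # only link 2 has excess delay
print(np.round(l1_min(A, A @ x_true), 3))
```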

    Active Topology Inference using Network Coding

    Get PDF
    Our goal is to infer the topology of a network when (i) we can send probes between sources and receivers at the edge of the network and (ii) intermediate nodes can perform simple network coding operations, i.e., additions. Our key intuition is that network coding introduces topology-dependent correlation in the observations at the receivers, which can be exploited to infer the topology. For undirected tree topologies, we design hierarchical clustering algorithms, building on our prior work. For directed acyclic graphs (DAGs), we first decompose the topology into a number of two-source, two-receiver (2-by-2) subnetwork components and then merge these components to reconstruct the topology. Our approach for DAGs builds on prior work on tomography and improves upon it by employing network coding to accurately distinguish among all different 2-by-2 components. We evaluate our algorithms through simulation of a number of realistic topologies and compare them to active tomographic techniques without network coding. We also make connections between our approach and alternatives, including passive inference, traceroute, and packet marking.
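
    A toy illustration (my own, and far simpler than the paper's actual 2-by-2 classification) of the key intuition that coding at an intermediate node makes receiver observations depend on the topology:

```python
# Toy illustration only: one bit-vector probe per source, idealized loss-free
# delivery, and a single candidate coding point.

def probe_shared_joining_point(x1, x2):
    """2-by-2 component in which both source-to-receiver paths meet at one
    coding node: each receiver observes the sum (XOR) of the two probes."""
    coded = x1 ^ x2
    return coded, coded

def probe_disjoint_paths(x1, x2):
    """2-by-2 component with no shared coding node: each receiver observes
    only the probe of its own source, unmodified."""
    return x1, x2

# Sending distinct probes makes the two candidate topologies distinguishable
# from the receivers' observations alone.
x1, x2 = 0b1010, 0b0110
print("shared joining point:", probe_shared_joining_point(x1, x2))
print("disjoint paths:      ", probe_disjoint_paths(x1, x2))
```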

    EM's Convergence in Gaussian Latent Tree Models

    Full text link
    We study the optimization landscape of the log-likelihood function and the convergence of the Expectation-Maximization (EM) algorithm in latent Gaussian tree models, i.e., tree-structured Gaussian graphical models whose leaf nodes are observable and non-leaf nodes are unobservable. We show that the unique non-trivial stationary point of the population log-likelihood is its global maximum, and establish that the expectation-maximization algorithm is guaranteed to converge to it in the single latent variable case. Our results for the landscape of the log-likelihood function in general latent tree models provide support for the extensive practical use of maximum likelihood-based methods in this setting. Our results for the EM algorithm extend an emerging line of work on obtaining global convergence guarantees for this celebrated algorithm. We show our results for the non-trivial stationary points of the log-likelihood by arguing that a certain system of polynomial equations obtained from the EM updates has a unique non-trivial solution. The global convergence of the EM algorithm follows by arguing that all trivial fixed points are higher-order saddle points.
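
    A minimal sketch (my own, not from the paper) of EM in the single-latent-variable case the abstract refers to: one unobserved root with Gaussian leaves, which is equivalent to a one-factor Gaussian model:

```python
import numpy as np

def em_one_factor(X, n_iter=200, seed=0):
    """EM for a Gaussian model with a single latent variable h ~ N(0, 1)
    and observed leaves x = a*h + noise, noise ~ N(0, diag(psi)).
    This is the single-latent-variable setting in which the abstract says
    EM converges to the global maximum."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    a = rng.normal(size=p)
    psi = np.var(X, axis=0)
    for _ in range(n_iter):
        # E-step: posterior of h given x is N(m, v), with shared variance v.
        v = 1.0 / (1.0 + np.sum(a * a / psi))
        m = v * (X @ (a / psi))                  # shape (n,)
        # M-step: update loadings a and diagonal noise variances psi.
        Ehh = np.sum(m * m) + n * v              # sum of E[h^2] over samples
        a = (X.T @ m) / Ehh
        psi = np.mean(X * X, axis=0) - a * (X.T @ m) / n
    return a, psi

# Toy data from a star tree with one hidden root and 4 observed leaves.
rng = np.random.default_rng(1)
a_true = np.array([1.0, 0.8, -0.6, 1.2])
h = rng.normal(size=500)
X = h[:, None] * a_true + rng.normal(scale=0.3, size=(500, 4))
a_hat, psi_hat = em_one_factor(X)
print("loadings (identifiable up to sign):", np.round(a_hat, 2))
```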
