
    On Network Coding Capacity - Matroidal Networks and Network Capacity Regions

    One fundamental problem in the field of network coding is to determine the network coding capacity of networks under various network coding schemes. In this thesis, we address the problem with two approaches: matroidal networks and capacity regions. In our matroidal approach, we prove the converse of the theorem which states that if a network is scalar-linearly solvable, then it is a matroidal network associated with a representable matroid over a finite field. As a consequence, we obtain a correspondence between scalar-linearly solvable networks and representable matroids over finite fields in the framework of matroidal networks. We prove a theorem about the scalar-linear solvability of networks and field characteristics. We provide a method for generating scalar-linearly solvable networks that are potentially different from the networks we already know to be scalar-linearly solvable. In our capacity region approach, we define a multi-dimensional object associated with networks, called the network capacity region, that is analogous to the rate regions in information theory. For the network routing capacity region, we show that the region is a computable rational polytope and provide exact algorithms and approximation heuristics for computing the region. For the network linear coding capacity region, we construct a computable rational polytope, with respect to a given finite field, that inner bounds the linear coding capacity region, and provide exact algorithms and approximation heuristics for computing the polytope. The exact algorithms and approximation heuristics we present are not polynomial-time schemes and may depend on the output size. Comment: Master of Engineering Thesis, MIT, September 2010, 70 pages, 10 figures
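    As a concrete illustration of the "computable rational polytope" viewpoint (and not of the thesis's actual algorithms), the minimal sketch below tests membership of a candidate rate vector in a polytope given by half-space constraints, using exact rational arithmetic; the constraints shown are hypothetical placeholders.

```python
# Illustrative only: exact membership test for a rate vector in a rational
# polytope described by half-spaces A x <= b. The constraints below are
# hypothetical placeholders, not capacity constraints from the thesis.
from fractions import Fraction as F

def in_polytope(A, b, x):
    """Return True iff A x <= b holds exactly, componentwise."""
    return all(sum(a * xi for a, xi in zip(row, x)) <= bound
               for row, bound in zip(A, b))

# Toy two-flow example: each rate is at most 1, and both flows share a
# unit-capacity bottleneck, so x1 + x2 <= 1.
A = [[F(1), F(0)],
     [F(0), F(1)],
     [F(1), F(1)]]
b = [F(1), F(1), F(1)]

print(in_polytope(A, b, [F(1, 2), F(1, 2)]))  # True  (on the boundary)
print(in_polytope(A, b, [F(2, 3), F(1, 2)]))  # False (exceeds the bottleneck)
```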

    Searching for Realizations of Finite Metric Spaces in Tight Spans

    An important problem that commonly arises in areas such as internet traffic-flow analysis, phylogenetics and electrical circuit design is to find a representation of any given metric D on a finite set by an edge-weighted graph, such that the total edge length of the graph is minimum over all such graphs. Such a graph is called an optimal realization, and finding such realizations is known to be NP-hard. Recently, Varone presented a heuristic greedy algorithm for computing optimal realizations. Here we present an alternative heuristic that exploits the relationship between realizations of the metric D and its so-called tight span T_D. The tight span T_D is a canonical polytopal complex that can be associated to D, and our approach explores parts of T_D for realizations in a way that is similar to the classical simplex algorithm. We also provide computational results illustrating the performance of our approach for different types of metrics, including l_1-distances and two-decomposable metrics for which it is provably possible to find optimal realizations in their tight spans. Comment: 20 pages, 3 figures
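    To make the notion of a realization concrete, here is a minimal sketch, on a toy metric of my own choosing, that checks whether a candidate edge-weighted graph realizes a given finite metric D (shortest-path distances between the labelled terminals match D) and reports its total edge length; it is a verification helper only, not the tight-span heuristic of the paper.

```python
# Illustrative only: verify a candidate realization of a finite metric D.
import math

def realizes(D, n_vertices, edges, terminals):
    """D: dict with D[a, b] for terminal pairs; edges: list of (u, v, weight);
    terminals: map from metric points to graph vertices."""
    d = [[math.inf] * n_vertices for _ in range(n_vertices)]
    for v in range(n_vertices):
        d[v][v] = 0.0
    for u, v, w in edges:                      # undirected, weighted edges
        d[u][v] = min(d[u][v], w)
        d[v][u] = min(d[v][u], w)
    for k in range(n_vertices):                # Floyd-Warshall shortest paths
        for i in range(n_vertices):
            for j in range(n_vertices):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    total_length = sum(w for _, _, w in edges)
    ok = all(math.isclose(d[terminals[a]][terminals[b]], D[a, b])
             for (a, b) in D)
    return ok, total_length

# Toy example: a path metric on three points realized by a path graph.
D = {('a', 'b'): 1.0, ('a', 'c'): 2.0, ('b', 'c'): 1.0}
print(realizes(D, 3, [(0, 1, 1.0), (1, 2, 1.0)],
               {'a': 0, 'b': 1, 'c': 2}))      # (True, 2.0)
```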

    Applying MDL to Learning Best Model Granularity

    The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high a precision generally involves modeling of accidental noise, and too low a precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code of the data set: this embodies "Occam's Razor." In two quite different experimental settings the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic matching, using multiple prototypes per character, the optimal prediction rate is predicted for the learned parameter (length of sampling interval) considered most likely by MDL, which is shown to coincide with the best value found experimentally. In the second experiment the task is to model a robot arm with two degrees of freedom using a three-layer feed-forward neural network, where we need to determine the number of nodes in the hidden layer giving best modeling performance. The optimal model (the one that extrapolates best on unseen examples) is predicted for the number of nodes in the hidden layer considered most likely by MDL, which again is found to coincide with the best value found experimentally. Comment: LaTeX, 32 pages, 5 figures. Artificial Intelligence journal, to appear
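    A minimal sketch of the two-part code idea on a generic granularity-selection task (choosing a histogram bin count), not the paper's handwriting or robot-arm experiments; the 0.5 * k * log2(n) parameter cost is a standard rough approximation rather than the authors' exact formulation.

```python
# Illustrative only: pick a granularity k by minimizing a two-part code
# length L(model) + L(data | model).  The data term is measured relative to a
# fixed quantization resolution that is identical for every k and so dropped.
import math
import random

def two_part_code_length(data, k):
    n = len(data)
    counts = [0] * k
    for x in data:                          # data assumed to lie in [0, 1)
        counts[min(int(x * k), k - 1)] += 1
    model_bits = 0.5 * k * math.log2(n)     # rough cost of the k bin parameters
    # Each point in bin j is coded with the estimated density (c_j / n) * k.
    data_bits = -sum(c * math.log2(c * k / n) for c in counts if c > 0)
    return model_bits + data_bits

random.seed(0)
sample = [random.betavariate(2, 5) for _ in range(2000)]
best_k = min(range(1, 64), key=lambda k: two_part_code_length(sample, k))
print("MDL-selected bin count:", best_k)
```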

    Bidirected minimum Manhattan network problem

    In the bidirected minimum Manhattan network problem, given a set T of n terminals in the plane, we need to construct a network N(T) of minimum total length with the property that the edges of N(T) are axis-parallel and oriented in such a way that every ordered pair of terminals is connected in N(T) by a directed Manhattan path. In this paper, we present a polynomial-time factor 2 approximation algorithm for the bidirected minimum Manhattan network problem. Comment: 14 pages, 16 figures
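    For intuition about the connectivity requirement, here is a small sketch that checks whether a candidate directed, axis-parallel path is a Manhattan path between its endpoints, i.e. its length equals their L1 distance; it illustrates only the problem definition, not the factor 2 approximation algorithm.

```python
# Illustrative only: verify that a directed polyline is a Manhattan path,
# i.e. axis-parallel edges whose total length equals the L1 distance
# between the first and last terminal.
def is_manhattan_path(path):
    """path: list of (x, y) points from s to t, traversed in edge orientation."""
    s, t = path[0], path[-1]
    length = 0.0
    for (x1, y1), (x2, y2) in zip(path, path[1:]):
        if x1 != x2 and y1 != y2:          # every edge must be axis-parallel
            return False
        length += abs(x2 - x1) + abs(y2 - y1)
    l1 = abs(t[0] - s[0]) + abs(t[1] - s[1])
    return length == l1                    # shortest <=> no detours

# A staircase from (0, 0) to (3, 2) of length 5 is a Manhattan path ...
print(is_manhattan_path([(0, 0), (2, 0), (2, 2), (3, 2)]))   # True
# ... while a path that overshoots in x and comes back is not.
print(is_manhattan_path([(0, 0), (4, 0), (4, 2), (3, 2)]))   # False
```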