
    Approximating the Held-Karp Bound for Metric TSP in Nearly Linear Time

    We give a nearly linear time randomized approximation scheme for the Held-Karp bound [Held and Karp, 1970] for metric TSP. Formally, given an undirected edge-weighted graph $G$ with $n$ vertices and $m$ edges and $\epsilon > 0$, the algorithm outputs in $O(m \log^4 n / \epsilon^2)$ time, with high probability, a $(1+\epsilon)$-approximation to the Held-Karp bound on the metric TSP instance induced by the shortest path metric on $G$. The algorithm can also be used to output a corresponding solution to the Subtour Elimination LP. We substantially improve upon the $O(m^2 \log^2(m)/\epsilon^2)$ running time achieved previously by Garg and Khandekar. The LP solution can be used to obtain a fast randomized $\big(\frac{3}{2} + \epsilon\big)$-approximation for metric TSP, which improves upon the running time of previous implementations of Christofides' algorithm.
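
    For intuition, the sketch below computes the classic Held-Karp lower bound by Lagrangian relaxation over 1-trees [Held and Karp, 1970]. It is a minimal illustration, not the nearly linear time LP algorithm of the abstract; the step schedule, iteration count, and toy instance are assumptions of this sketch.

```python
import numpy as np

def min_one_tree(d):
    """Minimum-cost 1-tree for a symmetric cost matrix d: a minimum
    spanning tree on nodes 1..n-1 (Prim's algorithm) plus the two
    cheapest edges at node 0. Returns (cost, node degrees)."""
    n = len(d)
    deg = np.zeros(n, dtype=int)
    in_tree = np.zeros(n, dtype=bool)
    in_tree[[0, 1]] = True          # node 0 is excluded; node 1 seeds the MST
    best, parent = d[1].copy(), np.ones(n, dtype=int)
    cost = 0.0
    for _ in range(n - 2):          # add the remaining n-2 nodes
        u = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[u] = True
        cost += d[parent[u], u]
        deg[u] += 1
        deg[parent[u]] += 1
        closer = ~in_tree & (d[u] < best)
        best[closer], parent[closer] = d[u][closer], u
    for v in np.argsort(d[0, 1:])[:2] + 1:   # attach node 0's two cheapest edges
        cost += d[0, v]
        deg[0] += 1
        deg[v] += 1
    return cost, deg

def held_karp_bound(d, iters=500, step0=1.0):
    """Subgradient ascent on the Lagrangian: maximize over penalties lam
    the value (min 1-tree cost under d + lam_i + lam_j) - 2 * sum(lam)."""
    lam = np.zeros(len(d))
    bound = -np.inf
    for t in range(iters):
        cost, deg = min_one_tree(d + lam[:, None] + lam[None, :])
        bound = max(bound, cost - 2 * lam.sum())
        lam += (step0 / (1 + t)) * (deg - 2)   # push all degrees toward 2
    return bound

# Toy instance: points on a line under the shortest-path (absolute) metric.
pts = np.array([0.0, 1.0, 3.0, 6.0])
d = np.abs(pts[:, None] - pts[None, :])
print(held_karp_bound(d))   # a valid lower bound on the optimal tour length (12)
```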

    Distributed Tree Kernels

    In this paper, we propose distributed tree kernels (DTKs) as a novel method to reduce the time and space complexity of tree kernels. Using a linear-complexity algorithm to compute vectors for trees, we embed feature spaces of tree fragments in low-dimensional spaces where the kernel computation is done directly with a dot product. We show that DTKs are faster, correlate with tree kernels, and obtain statistically similar performance on two natural language processing tasks.
    Comment: ICML 2012
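
    As a rough illustration of the underlying idea (not the paper's linear-time compositional algorithm), the sketch below hashes each complete subtree to a pseudo-random unit vector and represents a tree by the sum of its fragment vectors, so that a dot product approximates the fragment-counting kernel. The dimension D, the nested-tuple tree encoding, and the explicit fragment enumeration are assumptions of this sketch.

```python
import numpy as np

D = 4096  # embedding dimension; larger D gives a lower-variance approximation

def frag_vec(fragment):
    """Pseudo-random unit vector for a fragment, seeded by its hash
    (consistent within a process, which is all the comparison needs)."""
    rng = np.random.default_rng(abs(hash(fragment)) % 2**32)
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def subtrees(t):
    """All complete subtrees of a nested-tuple tree (leaves are strings)."""
    yield t
    if isinstance(t, tuple):
        for child in t[1:]:
            yield from subtrees(child)

def distributed_tree(t):
    """Tree embedding: the sum of its fragments' vectors."""
    return sum(frag_vec(f) for f in subtrees(t))

def exact_kernel(t1, t2):
    """Exact fragment-counting kernel: matching complete-subtree pairs."""
    s2 = list(subtrees(t2))
    return sum(s2.count(f) for f in subtrees(t1))

t1 = ("S", ("NP", "we"), ("VP", ("V", "run")))
t2 = ("S", ("NP", "they"), ("VP", ("V", "run")))
approx = distributed_tree(t1) @ distributed_tree(t2)
print(exact_kernel(t1, t2), round(float(approx), 2))   # dot product ~ exact count
```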

    Feature Reinforcement Learning: Part I: Unstructured MDPs

    General-purpose, intelligent, learning agents cycle through sequences of observations, actions, and rewards that are complex, uncertain, unknown, and non-Markovian. On the other hand, reinforcement learning is well-developed for small finite-state Markov decision processes (MDPs). Up to now, extracting the right state representations out of bare observations, that is, reducing the general agent setup to the MDP framework, has been an art that involves significant effort by designers. The primary goal of this work is to automate the reduction process and thereby significantly expand the scope of many existing reinforcement learning algorithms and the agents that employ them. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in Part II; the role of POMDPs is also considered there.
    Comment: 24 LaTeX pages, 5 diagrams
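
    The abstract does not spell the criterion out, so purely as a hedged illustration of the flavor of such an objective, the sketch below scores a candidate map phi (from history prefixes to finite states) by a BIC-style penalized log-likelihood of the induced state transitions. This is an assumption-laden stand-in, not the code-length criterion developed in the paper; phi_cost, the (observation, action) history encoding, and the penalty term are all hypothetical.

```python
import math
from collections import Counter

def phi_cost(history, phi, num_states, num_actions):
    """history: list of (observation, action) pairs; phi maps each history
    prefix to a state in range(num_states). Returns a BIC-style score:
    transition negative log-likelihood plus a complexity penalty.
    Lower = the induced finite MDP explains the experience better.
    (Rewards are omitted for brevity; the paper's criterion codes them too.)"""
    states = [phi(history[:t + 1]) for t in range(len(history))]
    trans, ctx = Counter(), Counter()
    for t in range(len(states) - 1):
        a = history[t][1]                    # action taken at time t
        trans[(states[t], a, states[t + 1])] += 1
        ctx[(states[t], a)] += 1
    nll = -sum(c * math.log(c / ctx[(s, a)]) for (s, a, s2), c in trans.items())
    n_params = num_states * num_actions * (num_states - 1)
    return nll + 0.5 * n_params * math.log(max(len(history), 2))

# Hypothetical usage on logged experience:
# best_phi = min(candidate_phis, key=lambda p: phi_cost(history, p, S, A))
```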