    Efficient Nearest Neighbor Classification Using a Cascade of Approximate Similarity Measures

    Nearest neighbor classification using shape context can yield highly accurate results in a number of recognition problems. Unfortunately, the approach can be too slow for practical applications, so approximation strategies are needed to make shape context practical. This paper proposes a method for efficient and accurate nearest neighbor classification in non-Euclidean spaces, such as the space induced by the shape context measure. First, a method is introduced for constructing a Euclidean embedding that is optimized for nearest neighbor classification accuracy. Using that embedding, multiple approximations of the underlying non-Euclidean similarity measure are obtained, at different levels of accuracy and efficiency. The approximations are automatically combined to form a cascade classifier, which applies the slower approximations only to the hardest cases. Unlike typical cascade-of-classifiers approaches, which are applied to binary classification problems, our method constructs a cascade for a multiclass problem. Experiments with a standard shape data set indicate that a two-to-three order of magnitude speed-up is gained over the standard shape context classifier, with minimal loss in classification accuracy.
    National Science Foundation (IIS-0308213, IIS-0329009, EIA-0202067); Office of Naval Research (N00014-03-1-0108)
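    The cascade idea described above, applying cheap approximations first and reserving the expensive non-Euclidean measure for the few ambiguous queries, can be illustrated with a minimal sketch. The function names, the margin-based confidence test, and the candidate-pruning rule below are illustrative assumptions, not the authors' actual construction.

```python
import numpy as np

def euclidean(a, b):
    # Cheap stand-in for a fast approximate measure (e.g. distance in an
    # optimized Euclidean embedding).
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

def cascade_nn_classify(query, train_points, train_labels, approx_measures,
                        exact_measure, margins, keep_fraction=0.25):
    """Nearest-neighbor classification with a cascade of similarity measures.

    approx_measures : distance functions ordered cheapest-first.
    exact_measure   : the expensive measure (e.g. a full shape-context
                      distance), applied only to queries the cheap stages
                      cannot settle.
    margins         : per-stage gap between best and second-best distance
                      that lets a stage commit to an answer early.
    All names, the margin test, and the pruning rule are illustrative
    assumptions, not the paper's method.
    """
    candidates = list(range(len(train_points)))
    for measure, margin in zip(approx_measures, margins):
        ranked = sorted((measure(query, train_points[i]), i) for i in candidates)
        if len(ranked) == 1:
            return train_labels[ranked[0][1]]
        (d1, i1), (d2, _) = ranked[0], ranked[1]
        if d2 - d1 >= margin:
            # The cheap stage is confident enough: classify without ever
            # touching the expensive measure.
            return train_labels[i1]
        # Otherwise keep only the most promising candidates for slower stages.
        keep = max(2, int(len(ranked) * keep_fraction))
        candidates = [i for _, i in ranked[:keep]]
    # Hardest cases: run the exact, expensive measure on the few survivors.
    best = min(candidates, key=lambda i: exact_measure(query, train_points[i]))
    return train_labels[best]

# Toy usage: the "exact" measure here is just Euclidean distance again,
# standing in for a costly non-Euclidean similarity such as shape context.
X = [[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]]
y = ["a", "a", "b"]
print(cascade_nn_classify([4.8, 5.1], X, y, [euclidean], euclidean, [0.5]))  # "b"
```

    In this toy version a stage commits to the cheap answer only when the nearest neighbor wins by a clear margin; otherwise progressively slower (more faithful) measures see a shrinking candidate set, which is where the speed-up comes from.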

    The k-mismatch problem revisited

    We revisit the complexity of one of the most basic problems in pattern matching. In the k-mismatch problem we must compute the Hamming distance between a pattern of length m and every m-length substring of a text of length n, as long as that Hamming distance is at most k. Where the Hamming distance is greater than k at some alignment of the pattern and text, we simply output "No". We study this problem in both the standard offline setting and also as a streaming problem. In the streaming k-mismatch problem the text arrives one symbol at a time and we must give an output before processing any future symbols. Our main results are as follows:
    1) Our first result is a deterministic $O(nk^2\log k/m + n\,\mathrm{polylog}\, m)$ time offline algorithm for k-mismatch on a text of length n. This is a factor of k improvement over the fastest previous result of this form from SODA 2000 by Amihood Amir et al.
    2) We then give a randomised and online algorithm which runs in the same time complexity but requires only $O(k^2\,\mathrm{polylog}\, m)$ space in total.
    3) Next we give a randomised $(1+\epsilon)$-approximation algorithm for the streaming k-mismatch problem which uses $O(k^2\,\mathrm{polylog}\, m/\epsilon^2)$ space and runs in $O(\mathrm{polylog}\, m/\epsilon^2)$ worst-case time per arriving symbol.
    4) Finally we combine our new results to derive a randomised $O(k^2\,\mathrm{polylog}\, m)$ space algorithm for the streaming k-mismatch problem which runs in $O(\sqrt{k}\log k + \mathrm{polylog}\, m)$ worst-case time per arriving symbol. This improves the best previous space complexity for streaming k-mismatch from FOCS 2009 by Benny Porat and Ely Porat by a factor of k. We also improve the time complexity of this previous result by an even greater factor, to match the fastest known offline algorithm (up to logarithmic factors).
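    To make the problem statement concrete, here is a naive reference solution. It only pins down what must be output at each alignment; it is not an algorithm from the paper, whose methods are far faster than this O(nm) baseline.

```python
def k_mismatch(text, pattern, k):
    """Naive reference solution to the k-mismatch problem.

    For every alignment of `pattern` against `text`, report the Hamming
    distance if it is at most k, and "No" otherwise.  Runs in O(nm) time;
    shown only to illustrate the problem, not the paper's algorithms.
    """
    n, m = len(text), len(pattern)
    results = []
    for i in range(n - m + 1):
        mismatches = 0
        for a, b in zip(text[i:i + m], pattern):
            if a != b:
                mismatches += 1
                if mismatches > k:
                    break  # already over budget, no need to keep counting
        results.append(mismatches if mismatches <= k else "No")
    return results

# Example: pattern "abc" against text "abdabc" with k = 1.
# Alignments: "abd" (1 mismatch), "bda" ("No"), "dab" ("No"), "abc" (0).
print(k_mismatch("abdabc", "abc", 1))  # [1, 'No', 'No', 0]
```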