
    Learning SMaLL Predictors

    We present a new machine learning technique for training small resource-constrained predictors. Our algorithm, the Sparse Multiprototype Linear Learner (SMaLL), is inspired by the classic machine learning problem of learning $k$-DNF Boolean formulae. We present a formal derivation of our algorithm and demonstrate the benefits of our approach with a detailed empirical study.
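
    As a rough illustration of the multiprototype idea and its connection to $k$-DNF (a disjunction of conjunctions), the toy sketch below labels a point positive if any of $k$ sparse linear prototypes fires. All names, shapes, and parameters are assumptions for illustration, not the paper's actual algorithm or interface.

    ```python
    import numpy as np

    def small_predict(X, W, b):
        """Multiprototype linear prediction (hypothetical sketch).

        X: (n, d) inputs; W: (k, d) sparse prototype weight vectors;
        b: (k,) biases. A point is labeled +1 if ANY prototype's linear
        score clears zero, mirroring the OR-of-ANDs structure of a k-DNF.
        """
        scores = X @ W.T + b              # (n, k): margin of each prototype
        fired = (scores > 0).any(axis=1)  # disjunction over prototypes
        return np.where(fired, 1, -1)

    # Example: two prototypes, each touching a single feature (sparsity).
    rng = np.random.default_rng(0)
    W = np.zeros((2, 5))
    W[0, 0], W[1, 3] = 1.0, 1.0           # made-up sparse prototypes
    b = np.array([-0.5, -0.5])
    X = rng.normal(size=(4, 5))
    print(small_predict(X, W, b))         # +1 iff x[0] > 0.5 or x[3] > 0.5
    ```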

    Online Markov Decoding: Lower Bounds and Near-Optimal Approximation Algorithms

    We resolve the fundamental problem of online decoding with general $n^{th}$ order ergodic Markov chain models. Specifically, we provide deterministic and randomized algorithms whose performance is close to that of the optimal offline algorithm even when latency is small. Our algorithms admit efficient implementation via dynamic programs, and readily extend to (adversarial) non-stationary or time-varying settings. We also establish lower bounds for online methods under latency constraints in both deterministic and randomized settings, and show that no online algorithm can perform significantly better than our algorithms. Empirically, just with latency one, our algorithm outperforms the online step algorithm by over 30% in terms of decoding agreement with the optimal algorithm on genome sequence data.

    Comment: Added experiments, fixed typos, and polished presentation. Currently under review.
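
    The abstract notes that the algorithms admit efficient implementation via dynamic programs. As a loose illustration of the general latency-constrained setting only (not the paper's algorithms), here is a minimal fixed-lag Viterbi-style decoder for a first-order chain that irrevocably commits each state `latency` steps behind the frontier; all names and parameters are assumptions.

    ```python
    import numpy as np

    def online_decode(obs, log_init, log_trans, log_emit, latency=1):
        """Fixed-lag Viterbi-style sketch of latency-constrained decoding.

        obs: observation indices; log_init: (S,) initial log-probs;
        log_trans: (S, S) transition log-probs; log_emit: (S, V) emission
        log-probs. At step t the state at position t - latency is committed
        by backtracking from the current best frontier state. This is a
        generic illustration, not the paper's algorithm. Requires latency >= 1.
        """
        assert latency >= 1
        T = len(obs)
        delta = log_init + log_emit[:, obs[0]]   # best score ending in each state
        backptr, committed = [], []
        for t in range(1, T):
            cand = delta[:, None] + log_trans    # cand[i, j]: come from i, go to j
            backptr.append(cand.argmax(axis=0))  # best predecessor of each state
            delta = cand.max(axis=0) + log_emit[:, obs[t]]
            if t >= latency:
                s = int(delta.argmax())          # frontier state at time t
                for bp in reversed(backptr[t - latency:]):
                    s = int(bp[s])               # walk back to time t - latency
                committed.append(s)              # irrevocable decision
        # flush the last `latency` positions from the final best state
        tail = [int(delta.argmax())]
        for bp in reversed(backptr[len(committed):]):
            tail.append(int(bp[tail[-1]]))
        return committed + tail[::-1]

    # Example with 2 states and 2 symbols (all parameters are made up):
    log_init = np.log([0.5, 0.5])
    log_trans = np.log([[0.8, 0.2], [0.3, 0.7]])
    log_emit = np.log([[0.9, 0.1], [0.2, 0.8]])
    obs = [0, 0, 1, 1, 0]
    print(online_decode(obs, log_init, log_trans, log_emit, latency=1))
    ```

    With latency one, each decision may use only one observation beyond the position being decoded, which is the regime the abstract's empirical comparison highlights.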