
    Recovery thresholds in the sparse planted matching problem

    We consider the statistical inference problem of recovering an unknown perfect matching, hidden in a weighted random graph, by exploiting the information arising from the use of two different distributions for the weights on the edges inside and outside the planted matching. Recent work demonstrated the existence of a phase transition, in the large-size limit, between a full and a partial recovery phase for a specific form of the weight distribution on fully connected graphs. We generalize and extend this result in two directions: we obtain a criterion for the location of the phase transition for generic weight distributions and possibly sparse graphs, exploiting a technical connection with branching random walk processes, as well as a quantitatively more precise description of the critical regime around the phase transition.
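
    As a concrete illustration of the problem setup, the following minimal sketch (an illustrative assumption, not the paper's code: the weight distributions, graph model, and recovery rule are all simplified choices) plants a perfect matching in a complete bipartite graph, gives the planted edges markedly lighter weights, and attempts recovery via a minimum-weight matching solver:

```python
# Minimal sketch of a planted matching instance (illustrative distributions,
# not the paper's exact model): planted edges get light Exp(1) weights,
# all other edges get heavy Exp(mean n) weights, and we try to recover the
# planted matching as the minimum-weight perfect matching.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 200

# Planted matching: left vertex i is matched to right vertex perm[i].
perm = rng.permutation(n)

# Non-planted weights: exponential with mean n (illustrative choice);
# planted weights: exponential with mean 1, so planted edges are lighter.
weights = rng.exponential(scale=n, size=(n, n))
weights[np.arange(n), perm] = rng.exponential(scale=1.0, size=n)

# Minimum-weight perfect matching via the Hungarian algorithm.
row, col = linear_sum_assignment(weights)
overlap = np.mean(col == perm)
print(f"fraction of planted edges recovered: {overlap:.3f}")
```

    With the two weight distributions this far apart, the solver should recover essentially the whole planted matching; bringing them closer together is the regime where the partial-recovery phase discussed above emerges.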

    Structured Prediction Cascades

    Structured prediction tasks pose a fundamental trade-off between the need for model complexity to increase predictive power and the limited computational resources for inference in the exponentially sized output spaces such models require. We formulate and develop structured prediction cascades: a sequence of increasingly complex models that progressively filter the space of possible outputs. We represent an exponentially large set of filtered outputs using max-marginals and propose a novel convex loss function that balances filtering error with filtering efficiency. We provide generalization bounds for these loss functions and evaluate our approach on handwriting recognition and part-of-speech tagging. We find that the learned cascades are capable of reducing the complexity of inference by up to five orders of magnitude, enabling the use of models which incorporate higher-order features and yield higher accuracy.
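
    To make the filtering step concrete, here is a minimal sketch of max-marginal pruning on a linear-chain model (the scores are random and the threshold rule is an illustrative interpolation between the best sequence score and the mean max-marginal, not necessarily the exact rule learned in the paper):

```python
# Sketch of max-marginal filtering on a chain: for each position t and
# state s we compute the score of the best full sequence constrained to
# pass through (t, s), then prune states whose max-marginal falls below
# a threshold.  Scores and threshold here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
T, S = 6, 5                       # sequence length, number of states
unary = rng.normal(size=(T, S))   # per-position state scores
pair = rng.normal(size=(S, S))    # transition scores

# Forward and backward max-sum passes.
fwd = np.zeros((T, S))
fwd[0] = unary[0]
for t in range(1, T):
    fwd[t] = unary[t] + np.max(fwd[t - 1][:, None] + pair, axis=0)
bwd = np.zeros((T, S))
for t in range(T - 2, -1, -1):
    bwd[t] = np.max(pair + (unary[t + 1] + bwd[t + 1])[None, :], axis=1)

# Max-marginal of (t, s): best total score of any sequence with y_t = s.
max_marg = fwd + bwd
best = max_marg[0].max()          # score of the overall best sequence

# Threshold interpolates between the best score and the mean max-marginal;
# alpha controls how aggressively states are filtered.
alpha = 0.5
tau = alpha * best + (1 - alpha) * max_marg.mean(axis=1, keepdims=True)
keep = max_marg >= tau
print("states kept per position:", keep.sum(axis=1))
```

    States pruned at one level never need to be enumerated by the more expensive models later in the cascade, which is where the reported inference savings come from.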

    Algorithms for learning parsimonious context trees

    Parsimonious context trees (PCTs) provide a sparse parameterization of conditional probability distributions. They are particularly powerful for modeling context-specific independencies in sequential discrete data. Learning PCTs from data is computationally hard due to the combinatorial explosion of the space of model structures as the number of predictor variables grows. Under the score-and-search paradigm, the fastest algorithm for finding an optimal PCT prior to the present work is based on dynamic programming. While the algorithm can handle small instances fast, it becomes infeasible when there are as few as half a dozen four-state predictor variables. Here, we show that common scoring functions enable the use of new algorithmic ideas, which can significantly expedite the dynamic programming algorithm on typical data. Specifically, we introduce a memoization technique, which exploits regularities within the predictor variables by equating different contexts associated with the same data subset, and a bound-and-prune technique, which exploits regularities within the response variable by pruning parts of the search space based on score upper bounds. On real-world data from recent applications of PCTs within computational biology, the ideas are shown to reduce the traversed search space and the computation time by several orders of magnitude in typical cases.
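
    The memoization idea can be sketched in a few lines. The toy example below (binary alphabets and a made-up BIC-style leaf score; a hypothetical simplification, not the paper's algorithm) keys the dynamic-programming cache on the subset of data rows that a context selects, so different contexts covering the same rows share one computed subtree score:

```python
# Toy sketch of subset-keyed memoization for context tree learning:
# illustrative assumption only, not the paper's algorithm.
import math

# Toy data: rows of (x1, x2, y), all variables over {0, 1}.
data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1)]
d = 2  # number of predictor variables

def leaf_score(rows):
    """Log-likelihood of y on the selected rows minus a BIC-style penalty
    (an illustrative score; the paper covers generic decomposable scores)."""
    ys = [data[i][-1] for i in rows]
    if not ys:
        return 0.0
    ll = sum(ys.count(v) * math.log(ys.count(v) / len(ys)) for v in set(ys))
    return ll - 0.5 * math.log(len(data))

cache = {}

def best_score(rows, depth):
    """Best subtree score over predictors depth..d-1, memoized on the
    data subset rather than on the context that produced it."""
    if depth == d:
        return leaf_score(rows)
    key = (rows, depth)
    if key in cache:
        return cache[key]
    # A node over predictor `depth` either merges both values into one
    # branch or splits them (the two PCT structures for a binary alphabet).
    merged = best_score(rows, depth + 1)
    split = sum(best_score(frozenset(i for i in rows if data[i][depth] == v),
                           depth + 1)
                for v in (0, 1))
    cache[key] = max(merged, split)
    return cache[key]

print("optimal score:", best_score(frozenset(range(len(data))), 0))
```

    Keying the cache on the frozen row subset rather than on the context description is what lets distinct contexts collapse to one cache entry; the bound-and-prune idea would additionally skip recursive calls whose score upper bound cannot beat the best alternative already found.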