    Hierarchical subspace models for contingency tables

    For statistical analysis of multiway contingency tables, we propose modeling interaction terms in each maximal compact component of a hierarchical model. By this approach we can search for parsimonious models with fewer degrees of freedom than the usual hierarchical model, while preserving the conditional independence structures of the hierarchical model. We discuss estimation and exact tests of the proposed model and illustrate the advantage of the proposed modeling with some data sets.
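
    As a point of reference for the models being refined, the sketch below fits an ordinary hierarchical log-linear model to a small three-way table with a Poisson GLM. The cell counts are hypothetical, and the paper's subspace refinement (constraining interaction terms within maximal compact components) is not implemented here; this is only the baseline it starts from.

    # A minimal baseline sketch: the hierarchical log-linear model [AB][BC]
    # on a 2x2x2 table, which encodes the conditional independence A ⊥ C | B.
    import itertools
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    cells = list(itertools.product([0, 1], repeat=3))
    counts = [20, 15, 8, 12, 10, 25, 9, 14]        # hypothetical cell counts
    df = pd.DataFrame(cells, columns=["A", "B", "C"])
    df["count"] = counts

    # Generators {A,B} and {B,C}: main effects plus the two interactions.
    fit = smf.glm("count ~ C(A)*C(B) + C(B)*C(C)", data=df,
                  family=sm.families.Poisson()).fit()
    print(fit.summary())

    The paper's proposal would further restrict the interaction terms inside each maximal compact component, gaining parsimony while keeping this conditional independence structure intact.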

    Efficient and Scalable 4th-Order Match Propagation

    We propose a robust method to match image feature points while taking geometric consistency into account. It is a careful adaptation of the match propagation principle to 4th-order geometric constraints (match quadruple consistency). With our method, a set of matches is explained by a network of locally-similar affinities. This approach is useful when simple descriptor-based matching strategies fail, in particular for highly ambiguous data, e.g., with repetitive patterns or where texture is lacking. As it scales easily to hundreds of thousands of matches, it is also useful when denser point distributions are sought, e.g., for high-precision rigid model estimation. Experiments show that our method is competitive (efficient, scalable, accurate, robust) against state-of-the-art methods in deformable object matching, camera calibration and pattern detection.
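
    To make the propagation principle concrete, here is a simplified best-first growth loop in Python. It enforces only a generic local consistency score rather than the paper's 4th-order quadruple constraints, and the seeds, candidate structure and score function are hypothetical stand-ins.

    import heapq

    def propagate(seeds, candidates, score, tau=0.5):
        """Greedy best-first growth of a match set.
        seeds      : iterable of (p, q) index matches assumed correct
        candidates : dict mapping a match to spatially nearby candidate matches
        score      : score(match, accepted) -> geometric consistency in [0, 1]
        """
        accepted = set(seeds)
        heap = [(-1.0, m) for m in accepted]   # max-heap via negated scores
        heapq.heapify(heap)
        while heap:
            _, m = heapq.heappop(heap)
            for c in candidates.get(m, []):
                if c in accepted:
                    continue
                s = score(c, accepted)         # local geometric agreement
                if s >= tau:                   # keep only consistent matches
                    accepted.add(c)
                    heapq.heappush(heap, (-s, c))
        return accepted

    The best-first order is what lets a few reliable seeds explain a large network of matches without re-checking every possible pair.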

    Recurrently Predicting Hypergraphs

    This work considers predicting the relational structure of a hypergraph for a given set of vertices, as is common in applications in particle physics, biological systems and other complex combinatorial problems. A problem arises from the number of possible multi-way relationships, or hyperedges, scaling as $\mathcal{O}(2^n)$ for a set of $n$ elements. Simply storing an indicator tensor for all relationships is already intractable for moderately sized $n$, prompting previous approaches to restrict the number of vertices a hyperedge connects. Instead, we propose a recurrent hypergraph neural network that predicts the incidence matrix by iteratively refining an initial guess of the solution. We leverage the property that most hypergraphs of interest are sparsely connected and reduce the memory requirement to $\mathcal{O}(nk)$, where $k$ is the maximum number of positive edges, i.e., edges that actually exist. To counteract the linearly growing memory cost of training a lengthening sequence of refinement steps, we further propose an algorithm that applies backpropagation through time on randomly sampled subsequences. We empirically show that our method can match an increase in intrinsic complexity without a performance decrease and demonstrate superior performance compared to state-of-the-art models.
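
    The following PyTorch sketch illustrates the two ingredients described above under assumed shapes and an assumed architecture (a GRU cell plus a linear head, not the authors' exact model): an n x k incidence matrix refined recurrently, trained with backpropagation through time restricted to a randomly sampled subsequence.

    import random
    import torch
    import torch.nn as nn

    class IncidenceRefiner(nn.Module):
        def __init__(self, n, k, hidden=64):
            super().__init__()
            self.cell = nn.GRUCell(n * k, hidden)
            self.head = nn.Linear(hidden, n * k)

        def forward(self, inc, h):
            h = self.cell(inc.flatten(1), h)
            return torch.sigmoid(self.head(h)).view_as(inc), h

    n, k, T, T_bptt = 10, 4, 12, 3                 # vertices, positive edges, steps
    model = IncidenceRefiner(n, k)
    opt = torch.optim.Adam(model.parameters())
    target = (torch.rand(1, n, k) > 0.7).float()   # hypothetical ground truth

    inc, h = torch.full((1, n, k), 0.5), torch.zeros(1, 64)
    start = random.randrange(T - T_bptt)
    with torch.no_grad():                          # roll forward gradient-free
        for _ in range(start):
            inc, h = model(inc, h)
    for _ in range(T_bptt):                        # backprop only this window
        inc, h = model(inc, h)
    loss = nn.functional.binary_cross_entropy(inc, target)
    opt.zero_grad(); loss.backward(); opt.step()

    Detaching the prefix keeps training memory bounded by the window length rather than by the full, lengthening refinement sequence.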

    Labeling the Features Not the Samples: Efficient Video Classification with Minimal Supervision

    Feature selection is essential for effective visual recognition. We propose an efficient joint classifier learning and feature selection method that discovers sparse, compact representations of input features from a vast sea of candidates, with an almost unsupervised formulation. Our method requires only the following knowledge, which we call the feature sign: whether or not a particular feature has, on average, stronger values over positive samples than over negative ones. We show how this can be estimated using as few as a single labeled training sample per class. Then, using these feature signs, we extend an initial supervised learning problem into an (almost) unsupervised clustering formulation that can incorporate new data without requiring ground-truth labels. Our method works both as a feature selection mechanism and as a fully competitive classifier. It has important properties: low computational cost and excellent accuracy, especially in difficult cases of very limited training data. We experiment on large-scale recognition in video and show superior speed and performance to established feature selection approaches such as AdaBoost, Lasso and greedy forward-backward selection, and to powerful classifiers such as SVM.
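
    The core estimate is simple enough to sketch directly in NumPy. The data below is synthetic and the thresholding classifier is a stand-in, not the paper's joint learning formulation; only the sign estimation from one labeled sample per class follows the idea described above.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 20
    X = rng.normal(size=(200, d))
    true_sign = rng.choice([-1.0, 1.0], size=d)
    y = (X @ true_sign > 0).astype(int)            # synthetic labels

    x_pos, x_neg = X[y == 1][0], X[y == 0][0]      # one sample per class
    sign = np.sign(x_pos - x_neg)                  # estimated feature signs

    scores = (X * sign).mean(axis=1)               # sign-corrected average
    pred = (scores > np.median(scores)).astype(int)
    print("accuracy:", (pred == y).mean())

    Once the signs are fixed, "stronger is better" holds for every flipped feature, which is the property that lets the subsequent clustering formulation proceed without labels.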

    Differential equation approximations for Markov chains

    We formulate some simple conditions under which a Markov chain may be approximated by the solution to a differential equation, with quantifiable error probabilities. The role of a choice of coordinate functions for the Markov chain is emphasised. The general theory is illustrated in three examples: the classical stochastic epidemic, a population process model with fast and slow variables, and core-finding algorithms for large random hypergraphs. Published in Probability Surveys (http://www.i-journals.org/ps/) by the Institute of Mathematical Statistics (http://www.imstat.org); DOI: http://dx.doi.org/10.1214/07-PS121.
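
    For the first example, the sketch below compares an exact (Gillespie) simulation of the stochastic SIR epidemic with an Euler solution of its limiting ODE; the rates and population size are hypothetical.

    import random

    N, beta, gamma, T = 1000, 1.5, 1.0, 5.0

    # Exact simulation of the SIR Markov chain (competing exponential clocks).
    s, i, t = N - 10, 10, 0.0
    while i > 0:
        rate_inf = beta * s * i / N                # infection rate
        rate_rec = gamma * i                       # recovery rate
        wait = random.expovariate(rate_inf + rate_rec)
        if t + wait > T:
            break
        t += wait
        if random.random() < rate_inf / (rate_inf + rate_rec):
            s, i = s - 1, i + 1                    # infection event
        else:
            i -= 1                                 # recovery event

    # Euler integration of the limit ODE  x' = -b*x*y,  y' = b*x*y - g*y.
    x, y, dt = (N - 10) / N, 10 / N, 1e-3
    for _ in range(int(T / dt)):
        x, y = x - beta * x * y * dt, y + (beta * x * y - gamma * y) * dt

    print("chain (S, I)/N:", s / N, i / N)
    print("ODE   (x, y)  :", round(x, 3), round(y, 3))

    For large N the scaled chain concentrates around the ODE trajectory, with deviation probabilities of the kind the paper quantifies.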

    Software Engineering and Complexity in Effective Algebraic Geometry

    We introduce the notion of a robust parameterized arithmetic circuit for the evaluation of algebraic families of multivariate polynomials. Based on this notion, we present a computation model, adapted to Scientific Computing, which captures all known branching parsimonious symbolic algorithms in effective Algebraic Geometry. We justify this model by arguments from Software Engineering. Finally, we exhibit a class of simple elimination problems of effective Algebraic Geometry which require exponential time to be solved by the branching parsimonious algorithms of our computation model.
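
    As a toy illustration of the central object (not the paper's formal model), a parameterized arithmetic circuit can be viewed as a DAG of +/* gates over parameters and variables, evaluated bottom-up; the gate encoding here is invented for the example.

    def eval_circuit(gates, inputs):
        """gates: list of ('in', name) or (op, i, j) with op in {'+', '*'},
        where i and j index earlier gates; returns the last gate's value."""
        vals = []
        for g in gates:
            if g[0] == "in":
                vals.append(inputs[g[1]])          # parameter or variable
            elif g[0] == "+":
                vals.append(vals[g[1]] + vals[g[2]])
            else:                                  # '*'
                vals.append(vals[g[1]] * vals[g[2]])
        return vals[-1]

    # The family f_t(x, y) = t*x*y + x, with parameter t and variables x, y.
    gates = [("in", "t"), ("in", "x"), ("in", "y"),
             ("*", 0, 1), ("*", 3, 2), ("+", 4, 1)]
    print(eval_circuit(gates, {"t": 2, "x": 3, "y": 5}))   # 2*3*5 + 3 = 33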