
    M\"obius Invariants of Shapes and Images

    Identifying when different images are of the same object, despite changes caused by imaging technologies or by processes such as growth, has many applications in fields such as computer vision and biological image analysis. One approach to this problem is to identify the group of possible transformations of the object and to find invariants to the action of that group, meaning that the object has the same values of the invariants despite the action of the group. In this paper we study the invariants of planar shapes and images under the Möbius group $\mathrm{PSL}(2,\mathbb{C})$, which arises in the conformal camera model of vision and may also correspond to neurological aspects of vision, such as the grouping of lines and circles. We survey properties of invariants that are important in applications, as well as the known Möbius invariants, and then develop a shape-recognition algorithm that is Möbius- and reparametrization-invariant, numerically stable, and robust to noise. We demonstrate the efficacy of this new invariant approach on sets of curves, and then develop a Möbius-invariant signature of grey-scale images.
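    The paper's own signature construction is more involved, but a minimal sketch of what "Möbius invariant" means is the classical cross-ratio of four points, which is unchanged under every map $z \mapsto (az+b)/(cz+d)$ in $\mathrm{PSL}(2,\mathbb{C})$. The point values and coefficients below are arbitrary choices for illustration, not taken from the paper:

```python
import numpy as np

def mobius(z, a, b, c, d):
    """Apply the Mobius transformation z -> (a*z + b) / (c*z + d), ad - bc != 0."""
    return (a * z + b) / (c * z + d)

def cross_ratio(z1, z2, z3, z4):
    """Classical Mobius invariant of four distinct points in the complex plane."""
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

rng = np.random.default_rng(0)
pts = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # four sample points
a, b, c, d = 2.0 + 1.0j, -1.0, 0.5j, 1.0 + 0.5j             # ad - bc != 0

# The cross-ratio is unchanged when all four points are transformed together.
print(np.isclose(cross_ratio(*pts), cross_ratio(*mobius(pts, a, b, c, d))))  # True
```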

    Semantic-Context-Based Augmented Descriptor For Image Feature Matching

    Abstract. This paper proposes an augmented version of local features that enhances the discriminative power of the feature without affecting its invariance to image deformations. The idea is to learn the semantics of local features and then exploit them, in conjunction with the bag-of-words paradigm, to build an augmented feature descriptor. Any local descriptor can be cast in the proposed context, so the approach generalizes easily to any local method. The semantic-context signature is a 2D histogram which accumulates the spatial distribution of the visual words around each local feature. The resulting semantic-context component is concatenated with the local feature to generate the proposed feature descriptor. This is expected to handle the ambiguities that occur in images with multiple similar motifs, as well as slight non-affine distortions, outliers, and detector errors. The approach is evaluated on two data sets. The first is intentionally selected to contain images with multiple similar regions and slight non-affine distortions; the second is the standard data set of Mikolajczyk. The evaluation shows that our approach yields significant improvements over the original descriptors and compares favourably with other methods.
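    A hedged sketch of how such an augmented descriptor could be assembled, assuming a distance-ring-by-word binning of the context histogram; the binning scheme, parameter names, and sizes below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def augmented_descriptor(feat_xy, feat_desc, nbr_xy, nbr_words,
                         n_words, n_rings=3, radius=60.0):
    """Sketch: 2D histogram over (distance ring, visual word) of the words
    surrounding one feature, flattened and concatenated with its descriptor."""
    hist = np.zeros((n_rings, n_words))
    dists = np.linalg.norm(nbr_xy - feat_xy, axis=1)
    for d, w in zip(dists, nbr_words):
        if d < radius:
            ring = int(d / radius * n_rings)          # which distance ring
            hist[ring, w] += 1.0
    if hist.sum() > 0:
        hist /= hist.sum()                            # normalize the context part
    return np.concatenate([feat_desc, hist.ravel()])  # augmented descriptor

# Toy usage: one SIFT-like feature with 5 quantized neighbors, 8-word codebook.
rng = np.random.default_rng(1)
desc = rng.random(128)
aug = augmented_descriptor(np.array([100.0, 80.0]), desc,
                           rng.uniform(60, 140, size=(5, 2)),
                           rng.integers(0, 8, size=5), n_words=8)
print(aug.shape)  # (152,) = 128 local dims + 3 rings x 8 words
```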

    Grounding semantics in robots for Visual Question Answering

    In this thesis I describe an operational implementation of an object detection and description system, incorporate it into an end-to-end Visual Question Answering system, and evaluate it on two visual question answering datasets for compositional language and elementary visual reasoning.

    A Review of Codebook Models in Patch-Based Visual Object Recognition

    The codebook-model-based approach, while ignoring any structural aspect in vision, nonetheless provides state-of-the-art performance on current datasets. The key role of a visual codebook is to provide a way to map the low-level features into a fixed-length vector in histogram space to which standard classifiers can be directly applied. The discriminative power of such a visual codebook determines the quality of the codebook model, whereas the size of the codebook controls the complexity of the model. Thus, the construction of a codebook is an important step, which is usually done by cluster analysis. However, clustering is a process that retains regions of high density in a distribution, so the resulting codebook need not have discriminant properties; it is also recognised as a computational bottleneck of such systems. In our recent work, we proposed a resource-allocating codebook that constructs a discriminant codebook in a one-pass design procedure and slightly outperforms more traditional approaches at drastically reduced computing times. In this review we survey several approaches proposed over the last decade, covering their feature detectors, descriptors, codebook construction schemes, choice of classifiers for recognising objects, and the datasets used to evaluate the proposed methods.
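    For concreteness, a minimal sketch of the standard clustering-based codebook pipeline the review surveys, using k-means as the cluster analysis; the descriptor dimensions and codebook size are arbitrary, and this is not the resource-allocating codebook itself:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
train_descs = rng.random((1000, 128))   # stand-in for SIFT-like local descriptors

K = 32                                  # codebook size controls model complexity
codebook = KMeans(n_clusters=K, n_init=4, random_state=0).fit(train_descs)

def bow_histogram(image_descs):
    """Quantize an image's descriptors into visual words and return the
    fixed-length, normalized K-bin histogram a standard classifier consumes."""
    words = codebook.predict(image_descs)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / hist.sum()

print(bow_histogram(rng.random((150, 128))).shape)  # (32,)
```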

    Visual Representations: Defining Properties and Deep Approximations

    Visual representations are defined in terms of minimal sufficient statistics of visual data, for a class of tasks, that are also invariant to nuisance variability. Minimal sufficiency guarantees that we can store a representation in lieu of raw data with smallest complexity and no performance loss on the task at hand. Invariance guarantees that the statistic is constant with respect to uninformative transformations of the data. We derive analytical expressions for such representations and show they are related to feature descriptors commonly used in computer vision, as well as to convolutional neural networks. This link highlights the assumptions and approximations tacitly made by these methods and explains empirical practices such as clamping, pooling and joint normalization.
    Comment: UCLA CSD TR140023, Nov. 12, 2014; revised April 13, 2015, November 13, 2015, February 28, 201
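    As a toy illustration of the invariance idea, not the paper's derivation: max-pooling a response map over windows yields a statistic that is constant under translations small enough to stay within a pooling window, one way a nuisance transformation can be quotiented out:

```python
import numpy as np

response = np.zeros(16)
response[5] = 1.0                       # a feature detected at position 5

def pooled(x, width=4):
    # Max over non-overlapping windows: a locally shift-invariant statistic.
    return x.reshape(-1, width).max(axis=1)

shifted = np.roll(response, 2)          # nuisance: translate by 2 samples
print(pooled(response))                 # [0. 1. 0. 0.]
print(pooled(shifted))                  # [0. 1. 0. 0.]  -> same statistic
```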