
    Object identification by using orthonormal circus functions from the trace transform

    In this paper we present an efficient way to both compute and extract salient information from trace-transform signatures for object identification tasks. We also present a feature selection analysis of the classical trace-transform functionals, which reveals that most of them retrieve redundant information, causing misleading similarity measurements. To overcome this problem, we propose a set of functionals based on Laguerre polynomials whose resulting signatures are mutually orthonormal. In this way, each signature provides salient, non-correlated information that contributes to the description of an image object. The proposed functionals were tested on a vehicle identification problem, outperforming the classical trace-transform functionals in terms of both computational complexity and identification rate.
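
    As a concrete illustration of the idea (a minimal sketch, not the authors' implementation), the snippet below projects the values sampled along a single tracing line onto the first few Laguerre polynomials, which are mutually orthonormal on [0, inf) under the weight exp(-t); the function name, sampling grid and toy input are assumptions made for the example.

```python
import numpy as np
from numpy.polynomial import laguerre

def laguerre_functionals(line_samples, t, n_funcs=5):
    """Project the values sampled along one tracing line onto the first
    n_funcs Laguerre polynomials.  The standard Laguerre polynomials are
    orthonormal on [0, inf) under the weight exp(-t), so the resulting
    coefficients carry non-redundant (uncorrelated) information."""
    dt = t[1] - t[0]
    weight = np.exp(-t)
    coeffs = []
    for k in range(n_funcs):
        c = np.zeros(k + 1)
        c[k] = 1.0                          # select the k-th Laguerre polynomial
        L_k = laguerre.lagval(t, c)
        coeffs.append(np.sum(line_samples * L_k * weight) * dt)
    return np.array(coeffs)

# toy usage: a synthetic intensity profile along one tracing line
t = np.linspace(0.0, 10.0, 200)
line = np.exp(-0.5 * (t - 3.0) ** 2)
print(laguerre_functionals(line, t))
```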

    Group invariance principles for causal generative models

    The postulate of independence of cause and mechanism (ICM) has recently led to several new causal discovery algorithms. The interpretation of independence and the way it is utilized, however, varies across these methods. Our aim in this paper is to propose a group theoretic framework for ICM to unify and generalize these approaches. In our setting, the cause-mechanism relationship is assessed by comparing it against a null hypothesis through the application of random generic group transformations. We show that the group theoretic view provides a very general tool to study the structure of data generating mechanisms with direct applications to machine learning.
    Comment: 16 pages, 6 figures.
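
    To illustrate the null-hypothesis idea (an illustrative sketch, not the paper's algorithm), the snippet below uses random orthogonal matrices as the generic group transformations for a linear mechanism y = Ax and compares a simple trace-based dependence score against its distribution under those random transformations; the score, function names and toy data are assumptions made for the example.

```python
import numpy as np

def trace_score(A, cov_x):
    """Dependence score between a linear mechanism A and the input
    covariance cov_x: the ratio is close to 1 when A is 'generic'
    (independent) relative to cov_x and deviates from 1 otherwise."""
    d = cov_x.shape[0]
    return np.trace(A @ cov_x @ A.T) / (np.trace(A @ A.T) * np.trace(cov_x) / d)

def group_null_test(A, cov_x, n_draws=1000, seed=None):
    """Compare the observed score against a null distribution obtained by
    applying random orthogonal transformations (generic group elements)
    to the mechanism."""
    rng = np.random.default_rng(seed)
    d = cov_x.shape[0]
    observed = trace_score(A, cov_x)
    null = np.empty(n_draws)
    for i in range(n_draws):
        Q, _ = np.linalg.qr(rng.standard_normal((d, d)))   # random orthogonal matrix
        null[i] = trace_score(A @ Q, cov_x)
    # empirical two-sided p-value: how atypical is the observed score under the null?
    p = np.mean(np.abs(null - null.mean()) >= np.abs(observed - null.mean()))
    return observed, p

# toy usage: a mechanism drawn independently of cov_x should look 'generic'
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
cov_x = np.diag(rng.uniform(0.5, 2.0, size=5))
print(group_null_test(A, cov_x, seed=0))
```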

    Robust signatures for 3D face registration and recognition

    Biometric authentication through face recognition has been an active area of research for the last few decades, motivated by its application-driven demand. The popularity of face recognition, compared to other biometric methods, is largely due to its minimal need for subject co-operation, the relative ease of data capture and its similarity to the natural way humans distinguish each other. 3D face recognition has recently received particular interest since three-dimensional face scans eliminate or reduce important limitations of 2D face images, such as illumination changes and pose variations. In fact, three-dimensional face scans are usually captured by scanners through the use of a constant structured-light source, making them invariant to environmental changes in illumination. Moreover, a single 3D scan also captures the entire face structure and allows for accurate pose normalisation. However, one of the biggest challenges still remaining with three-dimensional face scans is their sensitivity to large local deformations due to, for example, facial expressions. Owing to the nature of the data, such deformations bring about large changes in the 3D geometry of the scan. In addition, 3D scans are also characterised by noise and artefacts such as spikes and holes, which are uncommon in 2D images and require a pre-processing stage that is specific to the scanner used to capture the data. The aim of this thesis is to devise a face signature that is compact in size and overcomes the above-mentioned limitations. We investigate the use of facial regions and landmarks towards a robust and compact face signature, and we study, implement and validate a region-based and a landmark-based face signature. Combinations of regions and landmarks are evaluated for their robustness to pose and expressions, while the matching scheme is evaluated for its robustness to noise and data artefacts.
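
    As a toy illustration of a landmark-based signature (a simplification, not the signature developed in the thesis), the sketch below builds a rigid-motion-invariant descriptor from the pairwise distances between 3D facial landmarks and compares two scans by the distance between their descriptors; all names are hypothetical.

```python
import numpy as np

def landmark_signature(landmarks):
    """Rigid-motion-invariant descriptor for a face scan: the vector of
    all pairwise Euclidean distances between its 3D landmarks."""
    P = np.asarray(landmarks, dtype=float)               # shape (n_landmarks, 3)
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    iu = np.triu_indices(len(P), k=1)                    # upper triangle, no diagonal
    return D[iu]

def match_score(sig_a, sig_b):
    """Lower score = more similar faces; thresholding this score gives a
    simple identification decision."""
    return np.linalg.norm(sig_a - sig_b) / np.sqrt(len(sig_a))
```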

    Category-Specific Object Reconstruction from a Single Image

    Object reconstruction from a single image -- in the wild -- is a problem where we can make progress and get meaningful results today. This is the main message of this paper, which introduces an automated pipeline with pixels as inputs and 3D surfaces of various rigid categories as outputs in images of realistic scenes. At the core of our approach are deformable 3D models that can be learned from 2D annotations available in existing object detection datasets, that can be driven by noisy automatic object segmentations and which we complement with a bottom-up module for recovering high-frequency shape details. We perform a comprehensive quantitative analysis and ablation study of our approach using the recently introduced PASCAL 3D+ dataset and show very encouraging automatic reconstructions on PASCAL VOC.
    Comment: First two authors contributed equally. To appear at CVPR 2015.

    Processing mesh animations: from static to dynamic geometry and back

    Static triangle meshes are the representation of choice for artificial objects, as well as for digital replicas of real objects. They have proven themselves to be a solid foundation for further processing. Although triangle meshes are handy in general, it may seem that their discrete approximation of reality is a downside. In fact, the opposite is true: the approximation of the real object's shape remains the same even if we deliberately change the vertex positions in the mesh, and this freedom is what allows us to optimize the mesh. Given the meshes produced by modern acquisition methods, such an optimization step is always beneficial, and often even required, prior to further processing. Therefore, we present a general framework for optimizing surface meshes with respect to various target criteria. Because of the simplicity and efficiency of the setup, it can be adapted to a variety of applications. Although this framework was initially designed for single static meshes, the application to a set of meshes is straightforward. For example, we convert a set of meshes into compatible ones and use them as a basis for creating dynamic geometry. Consequently, we propose an interpolation method which is able to produce visually plausible interpolation results, even if the compatible input meshes differ by large rotations. The method can be applied to any number of input vertex configurations and, thanks to a hierarchical scheme, the approach is fast and can be used for very large meshes. Furthermore, we consider the opposite direction. Given an animation sequence, we propose a pre-processing algorithm that considerably reduces the number of meshes required to describe the sequence, thus yielding a compact representation. Our method is based on a clustering and classification approach, which can be utilized to automatically find the most prominent meshes of the sequence. The original meshes can then be expressed as linear combinations of these few representative meshes with only small approximation errors. Finally, we investigate the shape space spanned by those few meshes and show how to apply different interpolation schemes to create other shape spaces which are not based on vertex coordinates. We conclude with a careful analysis of these shape spaces and their usability for a compact representation of an animation sequence.
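
    As a hypothetical sketch of the compact-representation step described above, the snippet below expresses one animation frame as a least-squares linear combination of a few representative meshes and reports the approximation error; the function name and toy data are placeholders, not the thesis' implementation.

```python
import numpy as np

def fit_linear_combination(representatives, mesh):
    """Express one animation frame as a linear combination of a few
    representative meshes (all sharing the same connectivity).

    representatives : list of k vertex arrays, each of shape (n_vertices, 3)
    mesh            : vertex array of shape (n_vertices, 3)
    Returns the k blending weights and the RMS approximation error."""
    B = np.stack([r.reshape(-1) for r in representatives], axis=1)  # (3 * n_vertices, k)
    x = mesh.reshape(-1)
    w, *_ = np.linalg.lstsq(B, x, rcond=None)
    rms = np.sqrt(np.mean((B @ w - x) ** 2))
    return w, rms

# toy usage: a frame that truly is a blend of three representatives
reps = [np.random.rand(4, 3) for _ in range(3)]
frame = 0.2 * reps[0] + 0.5 * reps[1] + 0.3 * reps[2]
print(fit_linear_combination(reps, frame))     # weights ~ (0.2, 0.5, 0.3), tiny error
```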

    Non-isometric 3D shape registration.

    3D shape registration is an important task in computer graphics and computer vision. It has been widely used in the film industry, 3D animation, video games and the creation of AR/VR assets. Manually creating the 3D model of a character from scratch is tedious and time-consuming, and it can only be completed by professionally trained artists. With the development of 3D geometry acquisition technology, it has become easier and cheaper to capture high-resolution and highly detailed 3D geometry. However, the scanned data are often incomplete or noisy and therefore cannot be employed directly. To deal with these two problems, one typical and efficient solution is to deform an existing high-quality model (template) to fit the scanned data (target). Shape registration, as an essential technique for doing so, has attracted intensive attention. In recent decades, various shape registration approaches have been proposed for accurate template fitting; however, some challenges remain. It is well known that the template can differ greatly from the target in size and pose. Under the large (usually non-isometric) deformation between them, shear distortion can easily occur, which may lead to poor results such as degenerate triangles and fold-overs. Before deforming the template towards the target, reliable correspondences between them must first be found. Incorrect correspondences give wrong deformation guidance, which can also easily produce fold-overs. As mentioned before, the target always comes with noise; this is the part we want to filter out rather than fit the template to. Hence, non-isometric shape registration that is robust to noise is highly desirable for geometry modelling from scanned data. In this PhD research, we address existing challenges in shape registration, including how to prevent deformation distortion, how to reduce the occurrence of fold-overs and how to deal with noise in the target. Novel methods, including consistent as-similar-as-possible surface deformation and robust Huber-L1 surface registration, are proposed and validated through experimental comparison with the state of the art. The deformation technique plays an important role in shape registration. In this research, a consistent as-similar-as-possible (CASAP) surface deformation approach is proposed. Starting from the continuous deformation energy, we analyse the existing terms so that the discrete energy converges to the continuous one, a property we call energy consistency. Based on this deformation method, a novel CASAP non-isometric surface registration method is proposed. The proposed registration method preserves the angles of the triangles in the template surface, so that little distortion is introduced during the surface deformation, which reduces the risk of fold-overs and self-intersections. To reduce the influence of noise, a Huber-L1 based non-isometric surface registration is proposed, in which a Huber-L1 regularized model is imposed on the transformation variation and position difference. The proposed method is robust to noise and produces piecewise-smooth results while still preserving fine details of the target. We evaluate and validate our methods through extensive experiments, whose results demonstrate that the proposed methods in this thesis are more accurate and more robust to noise than the state of the art, and enable us to produce high-quality models with little effort.
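
    To illustrate the role of the Huber-L1 regulariser (a schematic example, not the thesis' actual energy), the sketch below shows the Huber penalty and how it could be applied to the variation of per-vertex transformation parameters across mesh edges: small, noise-level differences are penalised quadratically while large differences are penalised only linearly, which is what yields piecewise-smooth yet detail-preserving results. All names are illustrative assumptions.

```python
import numpy as np

def huber(r, delta=1.0):
    """Huber penalty: quadratic for |r| <= delta, linear beyond.  Acting
    like L2 on small (noise-level) residuals and like L1 on large ones is
    what makes the registration robust to noise and piecewise smooth."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r ** 2, delta * (a - 0.5 * delta))

def smoothness_term(transform_params, edges, delta=1.0):
    """Toy regulariser on the variation of per-vertex transformation
    parameters across mesh edges; edges is an array of (i, j) index pairs."""
    i, j = np.asarray(edges).T
    return np.sum(huber(transform_params[i] - transform_params[j], delta))
```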