33 research outputs found

    CaSPR: Learning Canonical Spatiotemporal Point Cloud Representations

    Full text link
We propose CaSPR, a method to learn object-centric Canonical Spatiotemporal Point Cloud Representations of dynamically moving or evolving objects. Our goal is to enable information aggregation over time and the interrogation of object state at any spatiotemporal neighborhood in the past, observed or not. Different from previous work, CaSPR learns representations that support spacetime continuity, are robust to variable and irregularly spacetime-sampled point clouds, and generalize to unseen object instances. Our approach divides the problem into two subtasks. First, we explicitly encode time by mapping an input point cloud sequence to a spatiotemporally-canonicalized object space. We then leverage this canonicalization to learn a spatiotemporal latent representation using neural ordinary differential equations and a generative model of dynamically evolving shapes using continuous normalizing flows. We demonstrate the effectiveness of our method on several applications including shape reconstruction, camera pose estimation, continuous spatiotemporal sequence reconstruction, and correspondence estimation from irregularly or intermittently sampled observations. Comment: NeurIPS 2020.
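The key property described above, a latent state that is continuous in time and can be queried at arbitrary, irregularly spaced timestamps, can be illustrated with a toy sketch. This is a minimal NumPy example under assumed dynamics (a fixed `tanh` layer integrated with Euler steps) and illustrative dimensions; it is not CaSPR's trained neural ODE or its continuous normalizing flow.

```python
import numpy as np

def latent_dynamics(z, t, W):
    """Illustrative latent ODE dynamics dz/dt = tanh(W @ z) (assumed form)."""
    return np.tanh(W @ z)

def integrate_latent(z0, t_query, W, dt=0.01):
    """Euler-integrate the latent state from t=0 to each sorted query time.

    Because the representation is continuous in time, the state can be
    read out at arbitrary, irregularly spaced timestamps."""
    states, z, t = [], z0.copy(), 0.0
    for tq in t_query:
        while t < tq:
            step = min(dt, tq - t)
            z = z + step * latent_dynamics(z, t, W)
            t += step
        states.append(z.copy())
    return np.stack(states)

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 16))
z0 = rng.normal(size=16)
# Irregular query times, as in intermittently sampled observations.
zs = integrate_latent(z0, [0.1, 0.35, 0.9], W)
print(zs.shape)  # (3, 16)
```

A trained model would replace `latent_dynamics` with a neural network and use an adaptive ODE solver, but the query-anywhere interface is the same.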

    Learning efficient image representations: Connections between statistics and neuroscience

    Get PDF
This thesis summarizes several works developed in the framework of analyzing the relations between image processing, statistics and neuroscience. These relations are analyzed from the point of view of the efficient coding hypothesis (H. Barlow [1961] and Attneave [1954]). This hypothesis suggests that the human visual system has adapted over time to process visual information efficiently, i.e. by exploiting the statistical regularities of the visual world. Under this classical idea, work proceeds in several directions. One direction analyzes the statistical properties of a revisited, extended and fitted classical model of the human visual system; no statistical information is used in the model. Results show that this model obtains a representation with good statistical properties, which is new evidence in favor of the efficient coding hypothesis. From the statistical point of view, different methods are proposed and optimized using natural images. The models obtained with these statistical methods show behavior similar to that of the human visual system, in both the spatial and color dimensions, which provides further evidence for the efficient coding hypothesis. Applications in image processing are an important part of the thesis: statistics- and neuroscience-based methods are employed to develop a wide set of image processing algorithms, and their results in denoising, classification, synthesis and quality assessment are comparable to some of the most successful current methods.

Robust Speaker Recognition Based on Latent Variable Models

    Get PDF
    Automatic speaker recognition in uncontrolled environments is a very challenging task due to channel distortions, additive noise and reverberation. To address these issues, this thesis studies probabilistic latent variable models of short-term spectral information that leverage large amounts of data to achieve robustness in challenging conditions. Current speaker recognition systems represent an entire speech utterance as a single point in a high-dimensional space. This representation is known as "supervector". This thesis starts by analyzing the properties of this representation. A novel visualization procedure of supervectors is presented by which qualitative insight about the information being captured is obtained. We then propose the use of an overcomplete dictionary to explicitly decompose a supervector into a speaker-specific component and an undesired variability component. An algorithm to learn the dictionary from a large collection of data is discussed and analyzed. A subset of the entries of the dictionary is learned to represent speaker-specific information and another subset to represent distortions. After encoding the supervector as a linear combination of the dictionary entries, the undesired variability is removed by discarding the contribution of the distortion components. This paradigm is closely related to the previously proposed paradigm of Joint Factor Analysis modeling of supervectors. We establish a connection between the two approaches and show how our proposed method provides improvements in terms of computation and recognition accuracy. An alternative way to handle undesired variability in supervector representations is to first project them into a lower dimensional space and then to model them in the reduced subspace. This low-dimensional projection is known as "i-vector". Unfortunately, i-vectors exhibit non-Gaussian behavior, and direct statistical modeling requires the use of heavy-tailed distributions for optimal performance. 
These approaches lack closed-form solutions and are therefore hard to analyze. Moreover, they do not scale well to large datasets. Instead of directly modeling i-vectors, we propose to first apply a non-linear transformation and then use a linear-Gaussian model. We present two alternative transformations and show experimentally that the transformed i-vectors can be optimally modeled by a simple linear-Gaussian model (factor analysis). We evaluate our method on a benchmark dataset with a large amount of channel variability and show that the results compare favorably with competing approaches. Also, our approach has closed-form solutions and scales gracefully to large datasets. Finally, a multi-classifier architecture trained in a multicondition fashion is proposed to address the problem of speaker recognition in the presence of additive noise. A large number of experiments are conducted to analyze the proposed architecture and to obtain guidelines for optimal performance in noisy environments. Overall, it is shown that multicondition training of multi-classifier architectures not only produces strong robustness in the anticipated conditions, but also generalizes well to unseen conditions.
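The dictionary-based cleanup of supervectors described above can be sketched in a few lines: encode a supervector as a linear combination of speaker and distortion atoms, then keep only the speaker part. Everything here is illustrative (toy dimensions, random dictionaries, and a min-norm least-squares encoder standing in for whatever learned encoding the thesis actually uses); it is a sketch of the idea, not the thesis' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
d, k_spk, k_dist = 30, 25, 15    # toy sizes; 40 atoms > 30 dims => overcomplete

# Assumed pre-learned dictionary atoms for speaker and distortion subspaces.
D_spk = rng.normal(size=(d, k_spk))
D_dist = rng.normal(size=(d, k_dist))
D = np.hstack([D_spk, D_dist])

# A toy "supervector" contaminated by an undesired distortion component.
s = D_spk @ rng.normal(size=k_spk) + D_dist @ rng.normal(size=k_dist)

# Encode over all atoms (min-norm least squares stands in for the
# regularized encoder a real system would learn).
coeffs, *_ = np.linalg.lstsq(D, s, rcond=None)
x, y = coeffs[:k_spk], coeffs[k_spk:]

# Discard the distortion contribution to obtain a cleaned supervector.
s_clean = D_spk @ x
print(s_clean.shape)  # (30,)
```

The split of `coeffs` into `x` and `y` mirrors the speaker/distortion partition of the dictionary; only `D_spk @ x` is kept for recognition.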

    Contour Detection-based Discovery of Mid-level Discriminative Patches for Scene Classification

    Get PDF
Feature extraction and representation are key steps in scene classification. In this paper, a contour detection-based mid-level feature learning method is proposed for scene classification. First, a sketch tokens-based contour detection scheme is proposed to initialize seed blocks for learning mid-level patches: the patches with more contour pixels are selected as seed blocks. This procedure is demonstrated to be helpful for scene classification. Next, the seed blocks are employed to train an exemplar SVM to discover other similar occurrences, and an entropy-rank criterion is utilized to mine the discriminative patches. Finally, scene categories are identified by matching the discriminative patches against test images. Extensive experiments on the MIT Indoor-67 dataset, the 15-scene dataset and the UIUC-sports dataset show that the proposed approach yields better performance than other state-of-the-art counterparts.
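The seed-block initialization step can be sketched as follows, assuming a precomputed binary contour map (e.g., from a sketch-tokens detector, which is not implemented here): score each candidate patch by its number of contour pixels and keep the top-scoring patches as seed blocks. Function names and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def select_seed_blocks(contour_map, block, stride, top_k):
    """Score each patch by its count of contour pixels and keep the top-k
    as seed blocks (the contour map itself is assumed precomputed)."""
    H, W = contour_map.shape
    scored = []
    for y in range(0, H - block + 1, stride):
        for x in range(0, W - block + 1, stride):
            n = contour_map[y:y + block, x:x + block].sum()
            scored.append((n, (y, x)))
    scored.sort(key=lambda t: -t[0])
    return [pos for _, pos in scored[:top_k]]

# Toy binary contour map with a dense contour region near (12..23, 12..23).
cmap = np.zeros((64, 64), dtype=int)
cmap[12:24, 12:24] = 1
seeds = select_seed_blocks(cmap, block=16, stride=8, top_k=3)
print(seeds[0])  # (8, 8) -- the patch covering most of the contour region
```

The selected seed positions would then seed exemplar-SVM training in the full pipeline.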

    Estimating Information in Earth System Data with Machine Learning

    Get PDF
Machine learning has made great strides in today's science and engineering in general, and in the Earth sciences in particular. However, Earth data pose particularly challenging problems for machine learning, due not only to the volume of data, but also to nonlinear spatial-temporal correlations, diverse sources of noise and uncertainty, and the heterogeneity of the information sources involved. More data does not necessarily imply more information. Therefore, extracting knowledge and information content through data analysis and modeling is crucial, especially in an era where data volume and heterogeneity are steadily increasing. This calls for advances in methods that can quantify information and characterize distributions and uncertainties accurately. Quantifying the information content of the data and models of our system is still an unresolved problem in statistics and machine learning. This thesis introduces new machine learning models to extract knowledge and information from Earth observation data. We propose kernel methods, Gaussian processes and multivariate Gaussianization to handle uncertainty and information quantification, and we apply these methods to a wide range of Earth system science problems. These involve many types of learning problems, including classification, regression, density estimation, synthesis, error propagation and the estimation of information-theoretic measures. We also demonstrate how these methods perform with different data sources, including sensory data (radar, multispectral, hyperspectral, infrared sounders), data products (observations, reanalyses and model simulations) and data cubes (aggregates of various spatial-temporal data sources). The presented methodologies allow us to quantify and visualize which salient features drive kernel classifiers, regressors and statistical dependence measures; how to better propagate errors and distortions of the input data with Gaussian processes; and where and when more information can be found in arbitrary spatial-temporal data cubes. The presented techniques open a wide range of possible use cases and applications, and we anticipate a wider and more robust adoption of statistical algorithms in the Earth and climate sciences.
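The multivariate Gaussianization idea can be illustrated with a toy sketch in the spirit of RBIG-style methods: alternate a rank-based marginal Gaussianization of each variable with a random rotation, so the joint distribution is progressively driven toward a standard multivariate normal. The rank transform, the stdlib inverse normal CDF, and QR-based random rotations are illustrative choices, not the exact procedure used in the thesis.

```python
import numpy as np
from statistics import NormalDist

# Inverse standard-normal CDF, vectorized from the stdlib implementation.
phi_inv = np.vectorize(NormalDist().inv_cdf)

def marginal_gaussianize(X):
    """Map each column to ~N(0,1) via its empirical CDF (rank transform)."""
    n = X.shape[0]
    ranks = X.argsort(axis=0).argsort(axis=0)
    u = (ranks + 0.5) / n               # empirical CDF values in (0, 1)
    return phi_inv(u)

def rbig_step(X, rng):
    """One iteration: marginal Gaussianization, then a random rotation."""
    G = marginal_gaussianize(X)
    Q, _ = np.linalg.qr(rng.normal(size=(X.shape[1], X.shape[1])))
    return G @ Q

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 3)) ** 3      # strongly non-Gaussian toy data
for _ in range(5):
    X = rbig_step(X, rng)
# After a few iterations the columns are approximately standard normal.
print(X.mean(axis=0), X.std(axis=0))
```

In RBIG-style estimators, the reduction in marginal non-Gaussianity accumulated across iterations yields estimates of total correlation, and hence of information content, which is how such transforms connect to the information quantification discussed above.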

Fingerprint Recognition for Forensic Applications

    Full text link
Unpublished doctoral thesis defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Tecnología Electrónica y de las Comunicaciones, in May 2015. The author was awarded a European Commission Marie Curie Fellowship under the Innovative Training Networks (ITN) in the project Bayesian Biometrics for Forensics (BBfor2, FP7-PEOPLE-ITN-2008), under Grant Agreement number 238803, between 2011 and 2013. The author was also funded through the European Union project Biometrics Evaluation and Testing (BEAT) during 2014 and 2015, which supported the research summarized in this Dissertation.

    Bio-Inspired Computer Vision: Towards a Synergistic Approach of Artificial and Biological Vision

    Get PDF
To appear in CVIU. Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed for explaining biological observations. Even though it is well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at a task level. In this paper we aim to bridge this gap by providing a computer-vision-task-centric presentation of models primarily originating in biological vision studies. Not only do we revisit some of the main features of biological vision and discuss the foundations of existing computational studies modelling biological vision, but we also consider three classical computer vision tasks from a biological perspective: image sensing, segmentation and optical flow. Using this task-centric approach, we discuss well-known biological functional principles and compare them with approaches taken by computer vision. Based on this comparative analysis of computer and biological vision, we present some recent models in biological vision and highlight a few models that we think are promising for future investigations in computer vision. To this end, this paper provides new insights and a starting point for investigators interested in the design of biology-based computer vision algorithms, and paves the way for much-needed interaction between the two communities, leading to the development of synergistic models of artificial and biological vision.

    Attention Mechanism for Recognition in Computer Vision

    Get PDF
It has been shown that humans do not focus their attention on an entire scene at once when performing a recognition task. Instead, they attend to the most important parts of the scene to extract the most discriminative information. Inspired by this observation, this dissertation studies the importance of the attention mechanism in recognition tasks in computer vision by designing novel attention-based models. Specifically, four scenarios are investigated that represent the most important aspects of the attention mechanism. First, an attention-based model is designed to reduce the dimensionality of visual features by selectively processing only a small subset of the data. We study this aspect of the attention mechanism in a framework based on object recognition in distributed camera networks. Second, an attention-based image retrieval system (i.e., person re-identification) is proposed which learns to focus on the most discriminative regions of a person's image and to process those regions with higher computational power using a deep convolutional neural network. Furthermore, we show how visualizing attention maps can make deep neural networks more interpretable: by visualizing the attention maps we can observe the regions of the input image that the neural network relies on to make a decision. Third, a model is proposed for estimating the importance of the objects in a scene for a given task. More specifically, the proposed model estimates the importance of the road users that a driver (or an autonomous vehicle) should pay attention to in a driving scenario in order to navigate safely. In this scenario, the attention estimate is the final output of the model. Fourth, an attention-based module and a new loss function are proposed for a meta-learning-based few-shot learning system, in order to incorporate the context of the task into the feature representations of the samples and to increase few-shot recognition accuracy. Overall, we show that attention is multi-faceted, and we study the attention mechanism from the perspectives of feature selection, computational cost reduction, interpretable deep learning models, task-driven importance estimation, and context incorporation. Through the study of these four scenarios, we further advance the field in which "attention is all you need".
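The computation shared by all four scenarios, weighting features by a softmax over relevance scores, can be sketched with a minimal scaled dot-product attention example. The one-hot keys and toy sizes below are illustrative stand-ins, not the dissertation's models.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys, values):
    """Scaled dot-product attention: score each region's key against the
    query, normalize the scores into an attention map, and return the
    attention-weighted sum of the values."""
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)   # relevance of each region
    weights = softmax(scores)            # attention map (sums to 1)
    return weights @ values, weights

rng = np.random.default_rng(0)
keys = np.eye(6, 8)                      # 6 regions, toy one-hot keys
values = rng.normal(size=(6, 8))         # toy per-region features
query = 10.0 * keys[2]                   # query strongly matching region 2

out, w = attend(query, keys, values)
print(w.argmax())  # 2 -- region 2 receives the most attention
```

Visualizing `w` over the input regions is exactly the kind of attention map that makes such models more interpretable, since it shows which regions the output depends on.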