102 research outputs found

    Bayesian non-parametrics for multi-modal segmentation

    Segmentation is a fundamental problem in computer vision research, with applications in many tasks such as object recognition, content-based image retrieval, and semantic labelling. Partitioning data into groups that are coherent in one or more characteristics, such as semantic class, is often a first step towards understanding the content of the data. Because information in the real world is generally perceived through multiple modalities, segmenting multi-modal data to extract its latent structure raises a recurring challenge: how to combine features from multiple modalities and resolve accidental ambiguities. This thesis tackles three main axes of multi-modal segmentation problems: video segmentation and object discovery, activity segmentation and discovery, and segmentation in 3D data. For the first two axes, we introduce non-parametric Bayesian approaches for segmenting multi-modal data collections, including groups of videos and context sensor streams. The proposed methods show benefits in integrating multiple features and data dependencies in a probabilistic formulation, inferring the number of clusters and hierarchical semantic partitions from the data, and resolving ambiguities by joint segmentation across videos or streams. The third axis focuses on the robust use of 3D information for various applications, as 3D perception provides richer geometric structure and a holistic observation of the visual scene. The studies covered in this thesis for utilizing various types of 3D data include: 3D object segmentation based on Kinect depth sensing, improved by cross-modal stereo; matching 3D CAD models to objects in the 2D image plane by exploiting the differentiability of the HOG descriptor; segmenting stereo videos with adaptive ensemble models; and fusing 2D object detectors with 3D context information for an augmented reality application scenario.
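    The abstract does not spell out the thesis's models, so as a hedged illustration of the core non-parametric Bayesian ingredient it relies on, inferring the number of clusters from the data rather than fixing it in advance, the following sketch samples cluster assignments from a Chinese restaurant process (CRP) prior. This is the standard textbook construction, not the thesis's actual formulation; `alpha` is the usual concentration parameter.

        # Minimal sketch: cluster labels under a CRP(alpha) prior. The
        # number of clusters is not fixed; it grows slowly with n.
        import numpy as np

        def crp_assignments(n, alpha=1.0, seed=0):
            rng = np.random.default_rng(seed)
            counts, labels = [], []
            for i in range(n):
                # P(existing cluster k) = n_k / (i + alpha); P(new) = alpha / (i + alpha)
                probs = np.array(counts + [alpha], dtype=float) / (i + alpha)
                k = rng.choice(len(probs), p=probs)
                if k == len(counts):
                    counts.append(1)      # open a new cluster
                else:
                    counts[k] += 1
                labels.append(k)
            return labels

    Running this with a larger `alpha` produces more clusters for the same number of points, which is the knob such models expose instead of a fixed cluster count.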

    EM Algorithms for Weighted-Data Clustering with Application to Audio-Visual Scene Analysis

    Data clustering has received a great deal of attention, and numerous methods, algorithms, and software packages are available. Among these techniques, parametric finite-mixture models play a central role due to their interesting mathematical properties and the existence of maximum-likelihood estimators based on expectation-maximization (EM). In this paper we propose a new mixture model that associates a weight with each observed point. We introduce the weighted-data Gaussian mixture and derive two EM algorithms. The first considers a fixed weight for each observation; the second treats each weight as a random variable following a gamma distribution. We propose a model selection method based on a minimum message length criterion, provide a weight initialization strategy, and validate the proposed algorithms by comparing them with several state-of-the-art parametric and non-parametric clustering techniques. We also demonstrate the effectiveness and robustness of the proposed clustering technique on heterogeneous data, namely in audio-visual scene analysis.
    Comment: 14 pages, 4 figures, 4 tables
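    The abstract describes the algorithms only at a high level. The sketch below is one plausible reading of the fixed-weight variant, assuming the common precision-scaling formulation p(x_i | z_i = k) = N(x_i; mu_k, Sigma_k / w_i), so that a larger weight makes an observation more influential; the function name and the exact update equations are assumptions of this sketch, not the paper's reference implementation.

        # EM for a Gaussian mixture with fixed per-point weights w_i,
        # under the assumed model p(x_i | z_i = k) = N(x_i; mu_k, Sigma_k / w_i).
        import numpy as np
        from scipy.stats import multivariate_normal

        def weighted_gmm_em(X, w, K, n_iter=50, seed=0):
            n, d = X.shape
            rng = np.random.default_rng(seed)
            mu = X[rng.choice(n, size=K, replace=False)].copy()
            Sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(d)] * K)
            pi = np.full(K, 1.0 / K)
            for _ in range(n_iter):
                # E-step: responsibilities under precision-scaled Gaussians
                # (looped for clarity, not speed).
                log_r = np.empty((n, K))
                for k in range(K):
                    for i in range(n):
                        log_r[i, k] = np.log(pi[k]) + multivariate_normal.logpdf(
                            X[i], mu[k], Sigma[k] / w[i])
                log_r -= log_r.max(axis=1, keepdims=True)
                r = np.exp(log_r)
                r /= r.sum(axis=1, keepdims=True)
                # M-step: weights act like per-point replication counts.
                rw = r * w[:, None]
                pi = r.sum(axis=0) / n
                mu = (rw.T @ X) / rw.sum(axis=0)[:, None]
                for k in range(K):
                    diff = X - mu[k]
                    Sigma[k] = (rw[:, k, None] * diff).T @ diff / r[:, k].sum()
                    Sigma[k] += 1e-6 * np.eye(d)  # keep covariances well-conditioned
            return pi, mu, Sigma, r

    Setting all weights to 1 recovers the standard EM updates for a Gaussian mixture, which is a quick sanity check for the sketch.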

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is therefore important to obtain a computational understanding of how humans form a symbol system and acquire semiotic skills through autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and with other systems. Understanding human social interactions, and developing a robot that can communicate smoothly with human users over the long term, requires an understanding of the dynamics of symbol systems. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system, one that is socially self-organized through both semiotic communication and physical interaction among autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe state-of-the-art research topics in SER, e.g., multimodal categorization, word discovery, and double articulation analysis, which enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual, haptic, and auditory information as well as acoustic speech signals, in a fully unsupervised manner. Finally, we suggest future directions for research in SER.
    Comment: submitted to Advanced Robotics

    Bayesian Modelling of Functional Whole Brain Connectivity


    A Generalization of Otsu's Method and Minimum Error Thresholding

    We present Generalized Histogram Thresholding (GHT), a simple, fast, and effective technique for histogram-based image thresholding. GHT works by performing approximate maximum a posteriori estimation of a mixture of Gaussians with appropriate priors. We demonstrate that GHT subsumes three classic thresholding techniques as special cases: Otsu's method, Minimum Error Thresholding (MET), and weighted percentile thresholding. GHT thereby enables continuous interpolation between those three algorithms, which allows thresholding accuracy to be improved significantly. GHT also provides a clarifying interpretation of the common practice of coarsening a histogram's bin width during thresholding. We show that GHT outperforms or matches the performance of all algorithms on a recent challenge for handwritten document image binarization (including deep neural networks trained to produce per-pixel binarizations), and can be implemented in a dozen lines of code or as a trivial modification to Otsu's method or MET.
    Comment: ECCV 2020
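    Since GHT's priors are not given in the abstract, the sketch below instead shows the most familiar of the special cases it subsumes: Otsu's method on a precomputed histogram, which selects the threshold maximizing between-class variance. This is the textbook algorithm, not code from the paper.

        # Otsu's method: pick the threshold that maximizes the
        # between-class variance of the two resulting pixel classes.
        import numpy as np

        def otsu_threshold(hist):
            """hist: 1-D array of bin counts; returns the bin index t
            such that bins <= t form one class and bins > t the other."""
            p = hist / hist.sum()              # normalized histogram
            bins = np.arange(len(p))
            w0 = np.cumsum(p)                  # class-0 probability mass
            w1 = 1.0 - w0
            m = np.cumsum(p * bins)            # partial first moments
            with np.errstate(divide="ignore", invalid="ignore"):
                mu0 = m / w0                   # class means
                mu1 = (m[-1] - m) / w1
                between = w0 * w1 * (mu0 - mu1) ** 2
            return int(np.argmax(np.nan_to_num(between)))

    As the abstract suggests, GHT keeps the same sweep over candidate thresholds and replaces the between-class-variance score with a MAP criterion, which is why it can be written as a small modification of Otsu's method.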

    NewtonianVAE: Proportional Control and Goal Identification from Pixels via Physical Latent Spaces

    Learning low-dimensional latent state-space dynamics models has been a powerful paradigm for enabling vision-based planning and learning for control. We introduce a latent dynamics learning framework that is uniquely designed to induce proportional controllability in the latent space, thus enabling the use of much simpler controllers than in prior work. We show that our learned dynamics model enables proportional control from pixels, dramatically simplifies and accelerates behavioural cloning of vision-based controllers, and provides interpretable goal discovery when applied to imitation learning of switching controllers from demonstrations.
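    As a hedged sketch of what "proportional control from pixels" means once such a latent space is learned, the snippet below encodes the goal and current frames and acts on their latent difference. The `encoder` callable, the function name, and the scalar gain are illustrative assumptions, not the paper's API.

        # Proportional control in a learned latent space: u_t = K (z_goal - z_t).
        import numpy as np

        def p_control_step(encoder, goal_image, current_image, gain=0.5):
            z_goal = encoder(goal_image)       # latent position of the goal frame
            z_t = encoder(current_image)       # latent position of the current frame
            return gain * (z_goal - z_t)       # drive the latent error to zero

        # Usage with a stand-in encoder (a random linear map on flattened pixels):
        rng = np.random.default_rng(0)
        W = rng.normal(size=(2, 64 * 64))      # 2-D latent space, 64x64 images
        encode = lambda img: W @ img.ravel()
        u = p_control_step(encode, rng.random((64, 64)), rng.random((64, 64)))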

    Activity report. 2015

