    Fast unsupervised multiresolution color image segmentation using adaptive gradient thresholding and progressive region growing

    In this thesis, we propose a fast unsupervised multiresolution color image segmentation algorithm that exploits gradient information in an adaptive, progressive framework. The gradient-based segmentation method is initialized by a vector gradient calculation on the full-resolution input image in the CIE L*a*b* color space. The resulting edge map is used to adaptively generate thresholds for classifying regions of varying gradient densities at different levels of the input image pyramid, obtained through a dyadic wavelet decomposition scheme. At each level, the classification obtained by a progressively thresholded growth procedure is combined with an entropy-based texture model in a statistical merging procedure to obtain an interim segmentation. By combining a gradient-quantized confidence map with non-linear spatial filtering techniques, regions of high confidence are passed from one level to the next until the full-resolution segmentation is achieved. Evaluation of our results on several hundred images using the Normalized Probabilistic Rand (NPR) Index shows that our algorithm outperforms state-of-the-art segmentation techniques and is much more computationally efficient than its single-scale counterpart, with comparable segmentation quality.
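    As a rough illustration of the initialisation step, the sketch below computes a vector gradient in the CIE L*a*b* space and derives an adaptive threshold from the edge map's own statistics. The mean + k·std rule, the input file name, and the connected-components stand-in for region growing are assumptions for illustration, not the thesis' implementation.

```python
# Illustrative sketch only: vector gradient in CIE L*a*b* plus an adaptive
# threshold derived from the edge map's statistics (assumed rule, not the
# thesis' own). Region growing is approximated by connected components.
import numpy as np
from skimage import color, filters, io, measure

def vector_gradient_lab(rgb):
    """Combine per-channel Sobel gradients in L*a*b* into one magnitude map."""
    lab = color.rgb2lab(rgb)
    return np.sqrt(sum(filters.sobel(lab[..., c]) ** 2 for c in range(3)))

def smooth_region_mask(grad, k=1.0):
    """Adaptive threshold: pixels below mean + k*std are region-growing seeds."""
    return grad < grad.mean() + k * grad.std()

rgb = io.imread('input.png')[..., :3]   # hypothetical input image
seeds = smooth_region_mask(vector_gradient_lab(rgb))
labels = measure.label(seeds)           # connected low-gradient regions as a crude segmentation
```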

    Identifying locations from geospatial trajectories

    Harnessing the latent knowledge present in geospatial trajectories has the potential to revolutionise our understanding of behaviour. This paper discusses one component of such analysis, namely the extraction of significant locations. Specifically, we: (i) present the Gradient-based Visit Extractor (GVE) algorithm, capable of extracting periods of low mobility from geospatial data while maintaining resilience to noise and addressing the drawbacks of existing techniques, (ii) provide a comprehensive analysis of the properties of these visits and the consequent locations, extracted through clustering, and (iii) demonstrate the applicability of GVE to the problem of visit extraction with respect to representative use cases.
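    By way of illustration, a minimal visit extractor in this spirit might flag spans where inter-fix speed stays below a threshold for a minimum duration. The speed criterion and both thresholds below are assumptions for the sketch, not the GVE definition from the paper.

```python
# Toy low-mobility visit extraction from a (timestamp, lat, lon) trajectory.
# The 0.5 m/s speed threshold and 300 s minimum dwell are illustrative values.
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * math.asin(math.sqrt(a))

def extract_visits(points, speed_thresh=0.5, min_duration=300):
    """points: list of (timestamp_s, lat, lon). Returns (start, end) visit spans."""
    visits, start = [], None
    for (t0, *p0), (t1, *p1) in zip(points, points[1:]):
        speed = haversine_m(p0, p1) / max(t1 - t0, 1e-9)   # m/s between fixes
        if speed < speed_thresh:                            # low mobility: inside a visit
            start = t0 if start is None else start
        else:                                               # movement: close any open visit
            if start is not None and t0 - start >= min_duration:
                visits.append((start, t0))
            start = None
    if start is not None and points[-1][0] - start >= min_duration:
        visits.append((start, points[-1][0]))
    return visits
```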

    Target classification in multimodal video

    This thesis focuses on enhancing scene segmentation and target recognition methodologies through the use of contextual information. The algorithms developed to achieve this goal utilise multi-modal sensor information collected across varying scenarios, from controlled indoor sequences to challenging rural locations. The sensors are chiefly colour band and long wave infrared (LWIR), enabling persistent surveillance capabilities across all environments. In developing effective algorithms towards the outlined goals, key obstacles are identified and examined: recovering background scene structure from foreground object 'clutter'; employing contextual foreground knowledge to circumvent training a classifier when labeled data is not readily available; creating a labeled LWIR dataset to train a convolutional neural network (CNN) based object classifier; and the viability of spatial context for long-range target classification when big-data solutions are not enough. For an environment exhibiting frequent foreground clutter, such as a busy train station, we propose an algorithm that exploits foreground object presence to segment underlying scene structure that is rarely visible. If such a location is outdoors and surveyed by an infrared (IR) and visible-band camera set-up, scene context and contextual knowledge transfer allow reasonable class predictions to be determined for thermal signatures within the scene. Furthermore, a labeled LWIR image corpus is created to train an infrared object classifier using a CNN approach. The trained network demonstrates an effective classification accuracy of 95% over 6 object classes. However, performance is not sustained for IR targets acquired at long range, where low signal quality causes classification accuracy to drop. This is addressed by using spatial context to adjust network class scores, restoring robust classification capability.
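    As a toy example of that final step, spatial context can be folded into the classifier output as a per-location class prior multiplied into the softmax scores and renormalised. The prior values below are invented for illustration; the thesis' actual fusion rule is not specified in this abstract.

```python
# Minimal sketch: re-weight CNN class scores with a spatial-context prior.
import numpy as np

def apply_spatial_context(class_scores, context_prior):
    """Fuse softmax scores with an assumed P(class | location) and renormalise."""
    fused = class_scores * context_prior
    return fused / fused.sum()

scores = np.array([0.30, 0.25, 0.20, 0.15, 0.06, 0.04])   # 6-class CNN softmax output
prior  = np.array([0.05, 0.40, 0.30, 0.10, 0.10, 0.05])   # illustrative location prior
print(apply_spatial_context(scores, prior))                # context-adjusted class scores
```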

    Bayesian non-parametrics for multi-modal segmentation

    Segmentation is a fundamental problem in computer vision research with applications in many tasks, such as object recognition, content-based image retrieval, and semantic labelling. Partitioning data into groups coherent in one or more characteristics, such as semantic classes, is often a first step towards understanding the content of data. As information in the real world is generally perceived in multiple modalities, segmentation performed on multi-modal data to extract its latent structure usually encounters a challenge: how to combine features from multiple modalities and resolve accidental ambiguities. This thesis tackles three main axes of multi-modal segmentation problems: video segmentation and object discovery, activity segmentation and discovery, and segmentation in 3D data. For the first two axes, we introduce non-parametric Bayesian approaches for segmenting multi-modal data collections, including groups of videos and context sensor streams. The proposed methods offer benefits in: integrating multiple features and data dependencies in a probabilistic formulation, inferring the number of clusters from data along with hierarchical semantic partitions, and resolving ambiguities by joint segmentation across videos or streams. The third axis focuses on the robust use of 3D information for various applications, as 3D perception provides richer geometric structure and a holistic observation of the visual scene. The studies covered in this thesis for utilizing various types of 3D data include: 3D object segmentation based on Kinect depth sensing improved by cross-modal stereo, matching 3D CAD models to objects on the 2D image plane by exploiting the differentiability of the HOG descriptor, segmenting stereo videos based on adaptive ensemble models, and fusing 2D object detectors with 3D context information for an augmented reality application scenario.
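    To illustrate the core non-parametric idea (inferring the number of clusters from the data rather than fixing it), the sketch below fits a Dirichlet-process Gaussian mixture with scikit-learn. This is a generic stand-in on synthetic data, not the models developed in the thesis.

```python
# Dirichlet-process mixture: the number of effective clusters is inferred,
# with n_components acting only as an upper bound.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# synthetic data: three well-separated 2D Gaussian blobs
X = np.vstack([rng.normal(m, 0.3, size=(100, 2)) for m in (0, 3, 6)])

dpgmm = BayesianGaussianMixture(
    n_components=10,                                        # upper bound, not a fixed K
    weight_concentration_prior_type='dirichlet_process',
    random_state=0,
).fit(X)
active = (dpgmm.weights_ > 0.01).sum()                      # components actually used
print(f'inferred ~{active} clusters out of 10 allowed')
```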

    Multimodal Legal Information Retrieval

    The goal of this thesis is to present a multifaceted way of inducing semantic representations from legal documents and of accessing information in a precise and timely manner. The thesis explores approaches for semantic information retrieval (IR) in the legal context with a technique that maps specific parts of a text to the relevant concept. The technique segments text using Latent Dirichlet Allocation (LDA), a topic modeling algorithm, expands the concepts with Natural Language Processing techniques, and then associates the text segments with the concepts using a semi-supervised text similarity technique. This addresses two problems, namely user specificity in formulating queries and information overload: querying a large document collection with a set of concepts is more fine-grained, since specific information rather than full documents is retrieved. The second part of the thesis describes our Neural Network Relevance Model for E-Discovery Information Retrieval. The algorithm is essentially a feature-rich ensemble system whose component neural networks extract different relevance signals. The model has been trained and evaluated on the TREC Legal track 2010 data. The performance of our models across the board demonstrates that they capture the semantics and relatedness between query and document, which is important in the Legal Information Retrieval domain.
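    A compact sketch of the first stage might look as follows, using scikit-learn's LDA and cosine similarity as stand-ins for the thesis' topic segmentation and semi-supervised similarity technique. The example texts and concepts are invented for illustration.

```python
# Associate text segments with concepts via shared LDA topic mixtures.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

segments = ["the lessee shall pay rent monthly",
            "the court dismissed the appeal",
            "damages were awarded to the plaintiff"]
concepts = ["lease payment obligations", "appellate court decisions"]

vec = CountVectorizer()
X = vec.fit_transform(segments + concepts)           # shared vocabulary
lda = LatentDirichletAllocation(n_components=2, random_state=0)
topics = lda.fit_transform(X)                        # per-document topic mixtures

seg_t, con_t = topics[:len(segments)], topics[len(segments):]
sims = cosine_similarity(seg_t, con_t)               # segment-to-concept association
for s, row in zip(segments, sims):
    print(s, '->', concepts[row.argmax()])
```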

    Localisation of humans, objects and robots interacting on load-sensing floors

    Localisation, tracking and recognition of objects and humans are basic tasks of high value in ambient intelligence applications. Sensing floors were introduced to address these tasks in a non-intrusive way. To recognise the humans moving on the floor, they are usually first localised, and then a set of gait features is extracted (stride length, cadence, pressure profile over a footstep). However, recognition generally fails when several people stand or walk together, preventing successful tracking. This paper presents a detection, tracking and recognition technique that uses objects' weight, and that continues working even when tracking individual persons becomes impossible. Inspired by computer vision, the technique processes the floor pressure image by segmenting the blobs containing objects, tracking them, and recognising their contents through a mix of inference and combinatorial search. The result lists the probabilities of assignments of known objects to observed blobs. The concept was successfully evaluated in daily-life activity scenarios involving multi-object tracking and recognition on low-resolution sensors, crossing of user trajectories, and weight ambiguity. This technique can be used to provide a probabilistic input for multi-modal object tracking and recognition systems.
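    As a toy version of the weight-based assignment step, one can enumerate assignments of known objects to observed blobs and score each by how well the summed object weights explain the measured loads. The Gaussian error model, the sigma value, and the example weights below are assumptions, not the paper's exact inference.

```python
# Combinatorial search over object-to-blob assignments, scored by weight residuals.
import math
from itertools import product

def assignment_probabilities(blob_weights, object_weights, sigma=2.0):
    """Normalised probability of each assignment under an assumed Gaussian
    weight-error model."""
    objects = list(object_weights)
    scores = {}
    # each object independently lands in exactly one blob
    for assign in product(range(len(blob_weights)), repeat=len(objects)):
        load = [0.0] * len(blob_weights)
        for obj, blob in zip(objects, assign):
            load[blob] += object_weights[obj]
        err = sum((m - l) ** 2 for m, l in zip(blob_weights, load))
        scores[assign] = math.exp(-err / (2 * sigma ** 2))
    total = sum(scores.values())
    return {a: s / total for a, s in scores.items()}

blobs = [82.0, 63.5]                                  # measured blob loads (kg)
objs = {'person_A': 80.0, 'person_B': 62.0, 'robot': 1.5}
probs = assignment_probabilities(blobs, objs)
print(max(probs.items(), key=lambda kv: kv[1]))       # most probable assignment
```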