23 research outputs found

    Video-based motion detection for stationary and moving cameras

    In real-world monitoring applications, moving object detection remains a challenging task due to factors such as background clutter and motion, illumination variations, weather conditions, noise, and occlusions. As a fundamental first step in many computer vision applications such as object tracking, behavior understanding, object or event recognition, and automated video surveillance, various motion detection algorithms have been developed, ranging from simple approaches to more sophisticated ones. In this thesis, we present two moving object detection frameworks. The first framework is designed for robust detection of moving and static objects in videos acquired from stationary cameras. This method exploits the benefits of fusing a motion computation method based on a spatio-temporal tensor formulation, a novel foreground and background modeling scheme, and a multi-cue appearance comparison. This hybrid system can handle challenges such as shadows, illumination changes, dynamic backgrounds, and stopped and removed objects. Extensive testing on the CVPR 2014 Change Detection benchmark dataset shows that the proposed system, FTSG (flux tensor with split Gaussian models), outperforms most state-of-the-art methods. The second framework adapts moving object detection to full motion videos acquired from moving airborne platforms. This framework has two main modules. The first module stabilizes the video with respect to a set of base frames in the sequence. The stabilization is done by estimating four-point homographies using prominent feature (PF) block matching, motion filtering, and RANSAC for robust matching. Once the frame-to-base-frame homographies are available, the flux tensor motion detection module, which uses local second-derivative information, is applied to detect moving salient features. Spurious responses at the frame boundaries are suppressed, and other post-processing operations are applied to reduce false alarms and produce accurate moving blob regions that will be useful for tracking.
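    The stabilization step above follows a standard feature-matching and RANSAC pipeline. The sketch below illustrates that pipeline with off-the-shelf OpenCV primitives; it substitutes ORB features for the thesis's prominent-feature block matching, and all names are illustrative, not taken from the thesis.

```python
# Minimal frame-to-base-frame stabilization sketch (assumed pipeline):
# detect features, match them, estimate a homography with RANSAC, warp.
import cv2
import numpy as np

def stabilize_to_base(base_gray, frame_gray):
    """Warp frame_gray into the coordinate frame of base_gray."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_b, des_b = orb.detectAndCompute(base_gray, None)
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_f, des_b)

    src = np.float32([kp_f[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier matches while estimating the 4-point homography.
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    h, w = base_gray.shape
    return cv2.warpPerspective(frame_gray, H, (w, h))
```

    Once frames share the base frame's coordinates, a motion detector such as the flux tensor can operate on the stabilized sequence.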

    Object-Aware Tracking and Mapping

    Reasoning about geometric properties of digital cameras and optical physics enabled researchers to build methods that localise cameras in 3D space from a video stream while, often simultaneously, constructing a model of the environment. Related techniques have evolved substantially since the 1980s, leading to increasingly accurate estimates. Traditionally, however, the quality of results is strongly affected by the presence of moving objects, incomplete data, or difficult surfaces, i.e. surfaces that are not Lambertian or lack texture. One insight of this work is that these problems can be addressed by going beyond geometric and optical constraints, in favour of object-level and semantic constraints. Incorporating specific types of prior knowledge in the inference process, such as motion or shape priors, leads to approaches with distinct advantages and disadvantages. After introducing relevant concepts in Chapter 1 and Chapter 2, methods for building object-centric maps in dynamic environments using motion priors are investigated in Chapter 5. Chapter 6 addresses the same problem as Chapter 5, but presents an approach which relies on semantic priors rather than motion cues. To fully exploit semantic information, Chapter 7 discusses the conditioning of shape representations on prior knowledge and the practical application to monocular, object-aware reconstruction systems.

    Reconstruction and Scalable Detection and Tracking of 3D Objects

    The task of detecting objects in images is essential for an autonomous system to categorize, comprehend and eventually navigate or manipulate its environment. Since many applications demand not only the detection of objects but also the estimation of their exact poses, 3D CAD models can prove helpful, since they provide means for feature extraction and hypothesis refinement. This work therefore explores two paths: first, we look into methods to create richly textured and geometrically accurate models of real-life objects. Using these reconstructions as a basis, we investigate how to improve 3D object detection and pose estimation, focusing especially on scalability, i.e. the problem of dealing with multiple objects simultaneously.
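    The second path, pose estimation against a reconstructed CAD model, typically reduces to a Perspective-n-Point problem once 2D-3D correspondences are available. The sketch below shows that step under the assumption that correspondences have already been obtained (e.g. by matching features rendered from the model to the image); it is an illustrative baseline, not the thesis's method.

```python
# Hedged sketch: object pose from assumed 2D-3D correspondences via PnP+RANSAC.
import cv2
import numpy as np

def estimate_pose(model_pts_3d, image_pts_2d, K):
    """Recover the object's rotation R and translation t w.r.t. the camera."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        model_pts_3d.astype(np.float32),  # Nx3 points on the CAD model
        image_pts_2d.astype(np.float32),  # Nx2 matched image points
        K, None,                          # camera intrinsics, no distortion
        reprojectionError=4.0)            # inlier threshold in pixels
    R, _ = cv2.Rodrigues(rvec)            # axis-angle -> 3x3 rotation matrix
    return ok, R, tvec, inliers
```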

    The attentive robot companion: learning spatial information from observation and verbal interaction

    Ziegler L. The attentive robot companion: learning spatial information from observation and verbal interaction. Bielefeld: Universität Bielefeld; 2015. This doctoral thesis investigates how a robot companion can gain a certain degree of situational awareness through observation of and interaction with its surroundings. The focus lies on the representation of the spatial knowledge gathered constantly over time in an indoor environment. In addition, against the background of research on an interactive service robot, methods for deployment in inference and verbal communication tasks are presented. The design and application of the models are guided by the requirements of referential communication. The approach involves the analysis of the dynamic properties of structures in the robot's field of view, allowing it to distinguish objects of interest from other agents and background structures. The use of multiple persistent models representing these dynamic properties enables the robot to track changes in multiple scenes over time and to establish spatial and temporal references. This work includes building a coherent representation that considers allocentric and egocentric aspects of spatial knowledge for these models. Spatial analysis is extended with a semantic interpretation of objects and regions. This top-down approach for generating additional context information enhances the grounding process in communication. A holistic, boosting-based classification approach using a wide range of 2D and 3D visual features anchored in the spatial representation allows the system to identify room types. The process of grounding referential descriptions from a human interlocutor in the spatial representation is evaluated through the referencing of furniture. This method uses a probabilistic network for handling ambiguities in the descriptions and employs a strategy for resolving conflicts. In order to demonstrate the real-world applicability of these approaches, the system was deployed on the mobile robot BIRON in a realistic apartment scenario involving observation and verbal interaction with an interlocutor.
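    At a high level, the boosting-based room-type classification mentioned above is a standard supervised pipeline over precomputed scene features. The snippet below is only such a sketch: the feature extraction, the feature matrix X and labels y, and the use of scikit-learn's AdaBoost are all assumptions for illustration, not details from the thesis.

```python
# Hedged sketch: boosting over precomputed 2D/3D scene features for room types.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def train_room_classifier(X, y):
    """X: (n_scenes, n_features) scene descriptors; y: room-type labels."""
    clf = AdaBoostClassifier(n_estimators=200)  # ensemble of weak learners
    clf.fit(X, y)
    return clf

# Usage (illustrative shapes and random stand-in data):
# clf = train_room_classifier(np.random.rand(50, 32), np.random.randint(0, 4, 50))
# room_type = clf.predict(np.random.rand(1, 32))
```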

    Object-level dynamic SLAM

    Visual Simultaneous Localisation and Mapping (SLAM) can estimate a camera's pose in an unknown environment and reconstruct an online map of it. Despite the advances in many real-time dense SLAM systems, most still assume a static environment, which is not a valid assumption in many real-world scenarios. This thesis aims to enable dense visual SLAM to run robustly in a dynamic environment, knowing where the sensor is in the environment and, just as importantly, what and where the objects in the surrounding environment are, for better scene understanding. The contributions of this thesis are threefold. The first presents one of the first object-level dynamic SLAM systems, which robustly tracks the camera pose while detecting, tracking, and reconstructing all the objects in dynamic scenes. It can continuously fuse geometric, semantic, and motion information for each object into an octree-based volumetric representation. One of the challenges in tracking moving objects is that object motion can easily break the illumination-constancy assumption. In our second contribution, we address this issue by proposing a dense feature-metric alignment to robustly estimate camera and object poses. We show how to learn dense feature maps and feature-metric uncertainties in a self-supervised way. Together they form a probabilistic feature-metric residual, which can be efficiently minimised using Gauss-Newton optimisation and easily coupled with other residuals. So far, we can only reconstruct objects' geometry from the sensor data. Our third contribution further incorporates a category-level shape prior into the object mapping. Conditioned on the depth measurements, the learned implicit function completes the unseen parts while reconstructing the observed parts accurately, yielding better reconstruction completeness and more accurate object pose estimation. These three contributions have advanced the state of the art in visual SLAM. We hope such object-level dynamic SLAM systems will help robots interact intelligently with the world that humans inhabit.
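    The Gauss-Newton minimisation of an uncertainty-weighted residual, as used for the feature-metric alignment above, can be sketched generically. In the snippet below, residual_fn, jacobian_fn, and the weights stand in for the learned feature-metric terms; they are assumptions of this sketch, not the thesis's implementation.

```python
# Generic weighted Gauss-Newton sketch: minimise sum_i w_i * r_i(x)^2.
import numpy as np

def gauss_newton(x, residual_fn, jacobian_fn, weights, iters=10):
    """x: (n,) parameters (e.g. a pose); returns the refined parameters."""
    W = np.diag(weights)                  # inverse variances of the residuals
    for _ in range(iters):
        r = residual_fn(x)                # (m,) residual vector at x
        J = jacobian_fn(x)                # (m, n) Jacobian dr/dx
        # Solve the normal equations of the weighted least-squares problem.
        delta = np.linalg.solve(J.T @ W @ J, -(J.T @ W @ r))
        x = x + delta
        if np.linalg.norm(delta) < 1e-8:  # converged
            break
    return x
```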

    Detection and identification of elliptical structure arrangements in images: theory and algorithms

    This thesis deals with different aspects of the detection, fitting, and identification of elliptical features in digital images. We place geometric feature detection in the a contrario statistical framework in order to obtain a combined, parameter-free detector of line segments and circular/elliptical arcs that controls the number of false detections. To improve the accuracy of the detected features, especially in cases of occluded circles and ellipses, a simple closed-form technique for conic fitting is introduced, which efficiently merges the algebraic distance with the gradient orientation. Identifying a configuration of coplanar circles in images through a discriminant signature usually requires the Euclidean reconstruction of the plane containing the circles. We propose an efficient signature computation method that bypasses the Euclidean reconstruction; it relies exclusively on invariant properties of the projective plane, thus being itself invariant under perspective.
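    The closed-form conic fitting mentioned above starts from the classic algebraic-distance criterion. The sketch below shows only that algebraic part, solved with an SVD; the thesis's blending with gradient orientation is omitted, so this is a simplified baseline, not the proposed method.

```python
# Algebraic-distance conic fit: a x^2 + b xy + c y^2 + d x + e y + f = 0.
import numpy as np

def fit_conic_algebraic(x, y):
    """x, y: 1-D arrays of point coordinates; returns (a, b, c, d, e, f)."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The right singular vector of the smallest singular value minimises
    # ||D c|| subject to ||c|| = 1, i.e. the total algebraic distance.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]
```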

    Viewpoint-robust visual search technique

    In this thesis, we have explored visual search techniques for images taken from different viewpoints and have tried to enhance the matching capability under viewpoint changes. We have proposed homography-based back-projection as a post-processing stage of Compact Descriptors for Visual Search (CDVS), the new MPEG standard; moreover, we have defined an affine detection based on an affine-adapted scale space, which steers the Gaussian scale space to capture features from affine-transformed images; and we have developed the corresponding gradient-based affine descriptor. Using these proposed techniques, the robustness of image retrieval to affine transformations has been significantly improved.

    The first chapter of this thesis introduces the background on visual search. In the second chapter, we propose a homography-based back-projection used as the post-processing stage of CDVS to improve the resilience to viewpoint changes. The theory behind this proposal is that each perspective projection of the image of a 2D object can be simulated as an affine transformation, and each pair of affine transformations is mathematically related by a homography matrix. Given that matrix, the image can be back-projected to simulate the image from another viewpoint. In this way, genuinely matching images can be declared as matching, because the perspective distortion has been reduced by the back-projection. An accurate homography estimation from images of different viewpoints requires at least 4 correspondences, which can be offered by the CDVS pipeline. The homography-based back-projection can thus be used to scrutinize images with too few matched keypoints: if they contain some homography relation, the perspective distortion can be reduced by exploiting the few provided correspondences. In the experiments, this technique proved to be quite effective, especially for images of 2D objects.

    The third chapter introduces the scale space, which is the kernel of feature detection in scale-invariant visual search techniques. The scale space, made of a series of Gaussian-blurred images, represents the image structures at different levels of detail. The Gaussian-smoothed images in the scale space make feature detection not invariant to affine transformations, which is the reason why scale-invariant visual search techniques are sensitive to affine transformations. Thus, in this chapter, we propose an affine-adapted scale space, which employs affine-steered Gaussian filters to smooth the images. This scale space is flexible with respect to different affine transformations and represents well the image structures seen from different viewpoints; with its help, features from different viewpoints can be well captured. In practice, scale-invariant visual search techniques employ a pyramid structure to speed up the construction. Following the principles of the affine Gaussian scale space, we also propose two structures to build it. The structure of the affine Gaussian scale space is similar to the pyramid structure because of similar sampling and cascading properties. Conversely, the affine Laplacian of Gaussian (LoG) structure is completely different: the Laplacian operator is hard to deform affinely, so, differently from the simple Laplacian operation on the scale space used in the general LoG construction, the affine LoG can only be obtained by affine LoG convolution and cascade implementations on the affine scale space.

    Using our proposed structures, both the affine Gaussian scale space and the affine LoG can be constructed. We have also explored the affine scale-space implementation in the frequency domain: in the second chapter, we also explore the spectrum of Gaussian image smoothing under affine transformation and propose two structures. Generally speaking, the implementation in the frequency domain is more robust to affine transformations, at the expense of a higher computational complexity. It makes sense to adopt an affine descriptor for affine-invariant visual search. In the fourth chapter, we propose an affine-invariant feature descriptor based on the affine gradient. The state-of-the-art feature descriptors, including SIFT and the Gradient Location and Orientation Histogram (GLOH), are based on the histogram of the image gradient around the detected features; if the image gradient is calculated as the difference of adjacent pixels, it is not affine invariant. Thus, in that chapter, we first propose an affine gradient, which contributes the affine invariance to the descriptor. This affine gradient is calculated directly as the derivative of the affine-Gaussian-blurred images. To simplify the processing, we also create the corresponding affine Gaussian derivative filters for the different detected scales, to quickly generate the affine gradient. With this affine gradient, we can apply the same scheme as the SIFT descriptor to generate the gradient histogram; by normalizing the histogram, the affine descriptor is formed. This descriptor is not only affine invariant but also rotation invariant, because the orientation of the area used to form the histogram is determined by the main direction of the gradient around the features. In practice, this affine descriptor is fully affine invariant and its image-matching performance is extremely good. In the concluding chapter, we draw some conclusions and describe future work.
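    The affine-adapted smoothing described above replaces the isotropic Gaussian of a standard scale space with a kernel shaped by a covariance matrix. A minimal sketch of such anisotropic smoothing follows; the kernel construction and parameter names are illustrative, and the thesis's pyramid and cascade structures are not reproduced.

```python
# Affine (anisotropic) Gaussian smoothing with covariance
# Sigma = R(theta) diag(s1^2, s2^2) R(theta)^T.
import cv2
import numpy as np

def affine_gaussian_blur(img, s1, s2, theta, radius=15):
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    Sigma_inv = np.linalg.inv(R @ np.diag([s1**2, s2**2]) @ R.T)
    ax = np.arange(-radius, radius + 1, dtype=np.float64)
    X, Y = np.meshgrid(ax, ax)
    # Evaluate exp(-0.5 * p^T Sigma^{-1} p) on the grid and normalise.
    q = (Sigma_inv[0, 0] * X * X
         + 2 * Sigma_inv[0, 1] * X * Y
         + Sigma_inv[1, 1] * Y * Y)
    kernel = np.exp(-0.5 * q)
    kernel /= kernel.sum()
    return cv2.filter2D(img, -1, kernel)
```

    Setting s1 = s2 recovers the isotropic Gaussian of the ordinary scale space; steering s1, s2, and theta lets the blur follow an affine deformation.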

    The robot's vista space: a computational 3D scene analysis

    Swadzba A. The robot's vista space: a computational 3D scene analysis. Bielefeld (Germany): Bielefeld University; 2011. The space that can be explored quickly from a fixed viewpoint without locomotion is known as the vista space. In indoor environments, single rooms and room parts follow this definition. The vista space plays an important role in situations with agent-agent interaction, as it is the directly surrounding environment in which the interaction takes place. A collaborative interaction of the partners in and with the environment requires that both partners know where they are, what spatial structures they are talking about, and what scene elements they are going to manipulate. This thesis focuses on the analysis of a robot's vista space. Mechanisms for extracting relevant spatial information are developed which enable the robot to recognize in which place it is, to detect the scene elements the human partner is talking about, and to segment the scene structures the human is changing. These abilities are addressed by the proposed holistic, aligned, and articulated modeling approaches. For a smooth human-robot interaction, the computed models should be aligned to the partner's representations. Therefore, the design of the computational models is based on the combination of psychological results from studies on human scene perception with basic physical properties of the perceived scene and of the perception itself. The holistic modeling realizes a categorization of room percepts based on the observed 3D spatial layout. Room layouts have room-type-specific features, and fMRI studies have shown that some of the human brain areas active in scene recognition are sensitive to the 3D geometry of a room. With the aligned modeling, the robot is able to extract the hierarchical scene representation underlying a scene description given by a human tutor. Furthermore, it is able to ground the inferred scene elements in its own visual perception of the scene. This modeling follows the assumption that cognition and language schematize the world in the same way. This is visible in the fact that a scene depiction mainly consists of relations between an object and its supporting structure, or between objects located on the same supporting structure. Last, the articulated modeling equips the robot with a methodology for articulated scene-part extraction and fast background learning under the short and disturbed observation conditions typical of human-robot interaction scenarios. Articulated scene parts are detected without object models by observing scene changes caused by their manipulation. Change detection and background learning are closely coupled, because change is defined phenomenologically as a variation of structure. This means that change detection involves a comparison of the currently visible structures with a representation in memory. In range sensing, this comparison can be conveniently implemented as a subtraction of the two representations. The three modeling approaches enable the robot to enrich its visual perceptions of the surrounding environment, the vista space, with semantic information about meaningful spatial structures useful for further interaction with the environment and the human partner.
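    The subtraction-based change detection described above is straightforward for range data. Below is a minimal sketch: the current depth frame is compared against a remembered background, and pixels that moved by more than a threshold are flagged. The threshold value and the handling of missing returns are assumptions for illustration.

```python
# Change detection on range images as a subtraction against memory.
import numpy as np

def detect_change(depth_now, depth_memory, tau=0.05):
    """Boolean mask of pixels whose depth changed by more than tau metres."""
    valid = (depth_now > 0) & (depth_memory > 0)  # ignore missing returns
    return valid & (np.abs(depth_now - depth_memory) > tau)
```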

    Improving the A-Contrario computation of a fundamental matrix in computer vision

    Laboratoire MAP5 (Mathématiques appliquées Paris 5), CNRS UMR 8145, Université Paris V - Paris Descartes. The fundamental matrix is a two-view tensor that plays a central role in computer vision geometry. We address its robust estimation from pairs of matched image features affected by noise and outliers, searching for a maximal subset of correct matches and the associated fundamental matrix. Improving on the widely used parametric RANSAC method, ORSA follows a probabilistic a contrario approach, looking for the set of matches that is least expected with respect to a uniform random distribution of image points. ORSA underperforms when this uniformity assumption is clearly violated. We propose an improvement of the ORSA method, based on the same a contrario framework but using a non-parametric estimate of the distribution of image features. The role and estimation of the fundamental matrix, and the SIFT matches used as data, are carefully explained with examples. Our proposal performs notably well in the common scenarios of low inlier ratios and local feature concentrations.
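    For reference, the parametric RANSAC baseline that ORSA and the proposed refinement improve upon is available off the shelf. The sketch below estimates the fundamental matrix from assumed SIFT matches given as Nx2 arrays; the a contrario NFA criterion itself is not reproduced here.

```python
# Baseline RANSAC estimation of the fundamental matrix from matched points.
import cv2
import numpy as np

def fundamental_ransac(pts1, pts2):
    """pts1, pts2: Nx2 arrays of corresponding points in the two views."""
    F, mask = cv2.findFundamentalMat(
        pts1.astype(np.float32), pts2.astype(np.float32),
        cv2.FM_RANSAC,
        ransacReprojThreshold=1.0,  # max point-to-epipolar-line distance (px)
        confidence=0.99)
    return F, mask.ravel().astype(bool)  # F and the inlier mask
```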