15 research outputs found

    Reconstruction 3D de scènes dynamiques par segmentation au sens du mouvement

    The goal of this work is to reconstruct the static and dynamic parts of a 3D scene using a mobile robot equipped with a 3D sensor. This reconstruction requires classifying the 3D points acquired over time as static or moving, independently of the robot's own motion. Our segmentation method works directly on the 3D data and studies the motions of the objects in the scene without any prior assumption. We develop a complete algorithm that reconstructs the static parts of the scene at each acquisition using a RANSAC that needs only 3 points to register the point clouds. The method has been tested on large outdoor scenes. We also show, on the KITTI test sequences, that taking the 3D data into account improves 2D approaches by resolving the ambiguities caused by the loss of one dimension in images. Keywords: pose estimation, 3D reconstruction, SfM, motion segmentation.
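
    The 3-point registration at the heart of this approach can be pictured with a short sketch. The snippet below is a minimal, illustrative Python implementation, not the authors' code: it estimates a rigid transform from 3 sampled correspondences with the Kabsch method and keeps the hypothesis with the most inliers; the function names, iteration count and inlier threshold are assumptions.

        import numpy as np

        def rigid_from_correspondences(P, Q):
            """Least-squares rigid transform (R, t) mapping points P onto Q (Kabsch)."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            return R, cQ - R @ cP

        def ransac_3pt(P, Q, iters=500, thresh=0.2, rng=np.random.default_rng(0)):
            """3-point RANSAC over matched 3D points P[i] <-> Q[i] (threshold in metres, assumed)."""
            best_inliers = np.zeros(len(P), dtype=bool)
            for _ in range(iters):
                idx = rng.choice(len(P), size=3, replace=False)
                R, t = rigid_from_correspondences(P[idx], Q[idx])
                residuals = np.linalg.norm(P @ R.T + t - Q, axis=1)
                inliers = residuals < thresh
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            if best_inliers.sum() < 3:
                raise RuntimeError("registration failed: not enough inliers")
            # Re-fit the transform on all inliers of the best hypothesis.
            return rigid_from_correspondences(P[best_inliers], Q[best_inliers]), best_inliers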

    Analyse du mouvement pour la reconstruction et la compréhension de scènes 3D dynamiques

    This thesis studies the problem of dynamic scene 3D reconstruction and understanding, using a calibrated 2D-3D camera setup mounted on a mobile platform, through the analysis of objects' motions. For static scenes, the sought 3D map can be obtained by registering the point cloud sequence. With dynamic scenes, however, a prior step of moving object elimination is required, which leads to the motion detection and segmentation problems. We provide solutions for two practical scenarios, namely the known and unknown camera motion cases. When the camera motion is unknown, our 3D-SSC and 3D-SMR algorithms segment the moving objects by analysing their 3D feature trajectories. In contrast, when the known camera motion is compensated, our 3D Flow Field Analysis algorithm inspects the spatio-temporal properties of the objects' motion. After removing the dynamic objects, we attain high quality 3D background and multi-body reconstructions using our DW-ICP point cloud registration algorithm. In the context of scene understanding, semantic object information is learned from images and transferred to the reconstructed static map via our 2D-to-3D label transfer scheme. All the proposed algorithms have been quantitatively and qualitatively evaluated and validated through extensive experiments on real outdoor scenes.
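
    For the known camera-motion case, the core idea of compensating the ego-motion and then thresholding the residual 3D flow can be sketched as follows. This is a simplified stand-in for the thesis' 3D Flow Field Analysis, with the threshold value and function name chosen purely for illustration.

        import numpy as np

        def moving_point_mask(points_t0, points_t1, R_cam, t_cam, thresh=0.15):
            """Flag tracked 3D points as moving once the known camera motion is compensated.

            points_t0, points_t1 : (N, 3) positions of the same tracked points at frames t and t+1
            R_cam, t_cam         : known rigid transform taking static points from the frame-t
                                   camera coordinates into the frame-(t+1) camera coordinates
            thresh               : residual displacement (metres, assumed) above which a point is moving
            """
            # Where each point would end up if it were static and only the camera had moved.
            predicted = points_t0 @ R_cam.T + t_cam
            # Residual 3D flow: the displacement left after removing the ego-motion.
            residual_flow = points_t1 - predicted
            return np.linalg.norm(residual_flow, axis=1) > thresh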

    Dynamic 3D Scene Reconstruction and Enhancement

    In this paper, we present a 3D reconstruction and enhancement approach for high quality dynamic city scene reconstructions. We first detect and segment the moving objects with a 3D Motion Segmentation approach that exploits the behaviour of the feature trajectories. Given the segmentation of both the dynamic and the static scene parts, we propose an efficient point cloud registration approach that combines the advantages of the 3-point RANSAC and Iterative Closest Point algorithms to produce a precise point cloud alignment. Furthermore, we propose a point cloud smoothing and texture mapping framework to enhance the reconstructions of both the static and the dynamic scene parts. The proposed algorithms are evaluated on the challenging real-world KITTI dataset with very satisfactory results.
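
    The smoothing step of the enhancement stage can be illustrated by a small neighbourhood-averaging pass over the registered cloud. The sketch below is only indicative of the idea, not the paper's method; the radius, blend factor and iteration count are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def smooth_point_cloud(points, radius=0.3, lam=0.5, iterations=2):
            """Laplacian-style smoothing: blend each point with the mean of its neighbours.

            points : (N, 3) registered point cloud
            radius : neighbourhood radius in metres (illustrative value)
            lam    : blend factor in [0, 1]; 0 keeps the input, 1 snaps to the local mean
            """
            pts = points.copy()
            for _ in range(iterations):
                tree = cKDTree(pts)
                neighbourhoods = tree.query_ball_point(pts, r=radius)
                local_means = np.array([pts[idx].mean(axis=0) for idx in neighbourhoods])
                pts = (1.0 - lam) * pts + lam * local_means
            return pts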

    3D Reconstruction of Dynamic Vehicles using Sparse 3D-Laser-Scanner and 2D Image Fusion

    Map building has become one of the most active research topics in computer vision. To obtain accurate large-scale 3D scene reconstructions, 3D laser scanners have been developed and are now widely used; they produce accurate but sparse 3D point clouds of the environment. However, the 3D reconstruction of rigidly moving objects alongside the large-scale scene reconstruction has received little attention. To achieve a detailed object-level 3D reconstruction, a single point cloud scan is insufficient because of its sparsity, and the traditional Iterative Closest Point (ICP) registration technique and its variants are not accurate and robust enough to register such point clouds, as they are easily trapped in local minima. In this paper, we propose a 3-point RANSAC with ICP refinement algorithm to build 3D reconstructions of rigidly moving objects, such as vehicles, using a 2D-3D camera setup. Results show that the proposed algorithm can robustly and accurately register the sparse 3D point clouds.
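
    A minimal sketch of the ICP refinement stage is given below, assuming a coarse (R, t) from the 3-point RANSAC stage is already available: nearest neighbours come from a k-d tree and each iteration re-solves the rigid transform in closed form. The iteration limit and convergence tolerance are assumptions.

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid(P, Q):
            """Closed-form least-squares rigid transform mapping points P onto points Q."""
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            return R, cQ - R @ cP

        def icp_refine(source, target, R, t, max_iter=30, tol=1e-5):
            """Refine an initial (R, t) by alternating nearest-neighbour matching and re-fitting."""
            tree = cKDTree(target)
            prev_err = np.inf
            for _ in range(max_iter):
                moved = source @ R.T + t
                dist, idx = tree.query(moved)           # closest target point for each source point
                R, t = best_rigid(source, target[idx])  # re-fit the full transform on these pairs
                err = dist.mean()
                if abs(prev_err - err) < tol:
                    break
                prev_err = err
            return R, t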

    Hand Gestures Recognition and Tracking

    In this project we develop a system that uses low-cost web cameras to recognise gestures and track the 2D orientation of the hand. The report is organized as follows. First, in section 2, we introduce the various methods we explored for hand detection, the most important step in hand gesture recognition, and discuss the results of several skin detection algorithms at length. This is followed by the region extraction step (section 3), where approaches such as contours and the convex hull are used to extract the region of interest, namely the hand. In section 4 a method is described to recognize the open hand gesture. Two additional gestures, palm and fist, are implemented using Haar-like features and discussed in section 5. In section 6 a Kalman filter is introduced to track the centroid of the hand region. The report concludes with a discussion of various issues related to the adopted approach (section 9) and recommendations for future improvements to the system (section 10).
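
    The centroid tracking described in section 6 can be sketched with a constant-velocity Kalman filter. The code below is a generic illustration; the state layout, noise levels and frame rate are assumptions rather than the report's actual settings.

        import numpy as np

        class CentroidKalman:
            """Constant-velocity Kalman filter tracking the (x, y) centroid of the hand region."""

            def __init__(self, dt=1 / 30.0, process_var=1e-2, meas_var=4.0):
                self.x = np.zeros(4)                      # state: [px, py, vx, vy]
                self.P = np.eye(4) * 100.0                # initial state uncertainty
                self.F = np.array([[1, 0, dt, 0],
                                   [0, 1, 0, dt],
                                   [0, 0, 1, 0],
                                   [0, 0, 0, 1]], float)  # constant-velocity motion model
                self.H = np.array([[1, 0, 0, 0],
                                   [0, 1, 0, 0]], float)  # only the centroid position is measured
                self.Q = np.eye(4) * process_var          # process noise
                self.R = np.eye(2) * meas_var             # measurement noise (pixels^2)

            def step(self, measured_centroid):
                # Predict the next state from the motion model.
                self.x = self.F @ self.x
                self.P = self.F @ self.P @ self.F.T + self.Q
                # Correct with the centroid measured from the segmented hand region.
                z = np.asarray(measured_centroid, float)
                y = z - self.H @ self.x
                S = self.H @ self.P @ self.H.T + self.R
                K = self.P @ self.H.T @ np.linalg.inv(S)
                self.x = self.x + K @ y
                self.P = (np.eye(4) - K @ self.H) @ self.P
                return self.x[:2]                         # filtered centroid estimate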

    High Quality Reconstruction of Dynamic Objects using 2D-3D Camera Fusion

    In this paper, we propose a complete pipeline for the high quality reconstruction of dynamic objects using a 2D-3D camera setup attached to a moving vehicle. Starting from the segmented motion trajectories of individual objects, we compute their precise motion parameters, register multiple sparse point clouds to increase the density, and build a smooth, textured surface from the dense (but scattered) point cloud. The success of our method relies on the proposed optimization framework for accurate motion estimation between two sparse point clouds. Our formulation for fusing closest-point and consensus-based motion estimates, in the absence and presence of motion trajectories respectively, is the key to obtaining such accuracy. Several experiments performed on both synthetic and real (KITTI) datasets show that the proposed framework is very robust and accurate.
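
    The fusion of consensus-based and closest-point estimates can be pictured, in a much simplified form, as a single weighted least-squares fit over both kinds of correspondences. The sketch below is not the paper's formulation: it simply stacks tracked-feature pairs and nearest-neighbour pairs and solves one weighted Kabsch problem, with illustrative weights.

        import numpy as np

        def weighted_rigid(P, Q, w):
            """Weighted least-squares rigid transform mapping P onto Q with per-pair weights w."""
            w = w / w.sum()
            cP, cQ = w @ P, w @ Q                                  # weighted centroids
            U, _, Vt = np.linalg.svd((P - cP).T @ ((Q - cQ) * w[:, None]))
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            return R, cQ - R @ cP

        def fused_motion(track_src, track_dst, cp_src, cp_dst, w_track=1.0, w_cp=0.2):
            """Fuse tracked-feature pairs (consensus term) and nearest-neighbour pairs (closest-point term).

            track_src, track_dst : (M, 3) matched feature positions from motion trajectories
            cp_src, cp_dst       : (N, 3) closest-point pairs from the current alignment
            w_track, w_cp        : relative weights of the two terms (illustrative values)
            """
            P = np.vstack([track_src, cp_src])
            Q = np.vstack([track_dst, cp_dst])
            w = np.concatenate([np.full(len(track_src), w_track),
                                np.full(len(cp_src), w_cp)])
            return weighted_rigid(P, Q, w)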

    3D Reconstruction from Specialized Wide Field of View Camera System Using Unified Spherical Model

    This paper proposes a method for three-dimensional (3D) reconstruction from a wide field of view (FoV) camera system. The camera system consists of two fisheye cameras, each with a 180-degree FoV, placed back to back to obtain a full 360-degree FoV, and a stereo vision camera that estimates depth information for the anterior view of the system. A novel calibration method based on the unified camera model representation is proposed to calibrate the multi-camera system, and an effective fusion algorithm is introduced to fuse the multi-camera images by exploiting their overlapping areas. Moreover, direct and fast 3D reconstruction of sparse feature matches, based on the spherical representation, is obtained with the proposed system.
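
    Under the unified spherical model, a pixel is first lifted to the unit viewing sphere before any multi-camera fusion or triangulation. A minimal back-projection sketch is given below; it ignores lens distortion, and the intrinsics in the example call are made up rather than calibrated values.

        import numpy as np

        def lift_to_sphere(u, v, fx, fy, cx, cy, xi):
            """Back-project pixel (u, v) to a unit-sphere ray under the unified spherical model.

            fx, fy, cx, cy : pinhole intrinsics of the unified model
            xi             : sphere-to-projection-centre displacement parameter
            (Distortion terms are omitted for clarity.)
            """
            # Normalised image coordinates on the z = 1 plane.
            mx, my = (u - cx) / fx, (v - cy) / fy
            r2 = mx * mx + my * my
            # Scale factor that places the point back on the unit sphere.
            s = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
            ray = np.array([s * mx, s * my, s - xi])
            return ray / np.linalg.norm(ray)  # unit ray on the viewing sphere

        # Example with illustrative (uncalibrated) parameters:
        ray = lift_to_sphere(640.0, 400.0, fx=350.0, fy=350.0, cx=512.0, cy=384.0, xi=0.9)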

    Moving Object Detection by 3D Flow Field Analysis

    Map-based localization and sensing are key components of autonomous driving technologies, for which high quality 3D map reconstruction is of utmost importance. However, due to the highly dynamic and uncontrollable nature of real-world environments, building a high quality 3D map is not straightforward and usually requires several strong assumptions. To address this challenge, we present a complete framework that detects and extracts the moving objects from a sequence of unordered and texture-less point clouds in order to build high quality static maps. To accurately detect the moving objects from data acquired by a possibly fast moving platform, we propose a novel 3D Flow Field Analysis approach that inspects the motion behaviour of the registered point sets. The proposed algorithm elegantly models the temporal and spatial displacement of the moving objects, so that both small moving objects (e.g. walking pedestrians) and large moving objects (e.g. moving trucks) can be detected effectively. Further, by incorporating the Sparse Subspace Clustering framework, we propose a Sparse Flow Clustering algorithm that groups the 3D motion flows under the constraints of both motion similarity and spatial closeness. In this way, the static scene parts and the moving objects can be processed independently to achieve photo-realistic 3D reconstructions. Finally, we show that the proposed 3D Flow Field Analysis algorithm and the Sparse Flow Clustering approach are highly effective for motion detection and segmentation, as exemplified on the KITTI benchmark, and yield high quality reconstructions of both static maps and rigidly moving objects.
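
    Grouping residual 3D flows under joint motion-similarity and spatial-closeness constraints can be approximated, for illustration only, by clustering in a joint (position, scaled flow) space. The sketch below uses DBSCAN as a simple stand-in for the paper's Sparse Flow Clustering; the scaling factor and clustering parameters are assumptions.

        import numpy as np
        from sklearn.cluster import DBSCAN

        def cluster_flow_field(points, flows, flow_weight=5.0, eps=0.8, min_samples=10):
            """Group 3D points whose residual flows are similar and that are spatially close.

            points : (N, 3) point positions after ego-motion compensation
            flows  : (N, 3) residual 3D flow vectors at those points
            flow_weight scales the flow coordinates so that both constraints influence the grouping.
            """
            features = np.hstack([points, flow_weight * flows])   # joint position/flow space
            labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
            return labels  # -1 marks points assigned to no moving cluster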

    Incomplete 3D Motion Trajectory Segmentation and 2D-to-3D Label Transfer for Dynamic Scene Analysis

    Knowledge of the static scene parts and the moving objects in a dynamic scene plays a vital role in scene modelling, scene understanding, and landmark-based robot navigation. The key information for these tasks lies in the semantic labels of the scene parts and the motion trajectories of the dynamic objects. In this work, we propose a method that segments the 3D feature trajectories based on their motion behaviours and assigns them semantic labels using 2D-to-3D label transfer. These feature trajectories are constructed with the proposed trajectory recovery algorithm, which takes losses in feature tracking into account. We introduce a complete framework for static-map and dynamic object reconstruction, as well as semantic scene understanding, for a calibrated, moving 2D-3D camera setup. Our motion segmentation approach is two orders of magnitude faster than state-of-the-art 3D motion segmentation methods while performing better, and it successfully handles the previously discarded incomplete-trajectory scenarios.
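
    The 2D-to-3D label transfer can be sketched as projecting each reconstructed 3D point into a semantically labelled image and reading off the class at that pixel. The camera conventions, function name and parameters below are assumptions made for illustration.

        import numpy as np

        def transfer_labels(points_world, label_image, K, R, t, unknown=-1):
            """Assign each 3D point the semantic label of the pixel it projects to.

            points_world : (N, 3) reconstructed points in world coordinates
            label_image  : (H, W) integer class map predicted on the 2D image
            K            : (3, 3) camera intrinsics; R, t map world points into the camera frame
            """
            H, W = label_image.shape
            cam = points_world @ R.T + t                   # world -> camera frame
            labels = np.full(len(points_world), unknown)
            in_front = cam[:, 2] > 0                       # keep points in front of the camera
            pix = cam[in_front] @ K.T
            pix = pix[:, :2] / pix[:, 2:3]                 # perspective division
            u = np.round(pix[:, 0]).astype(int)
            v = np.round(pix[:, 1]).astype(int)
            visible = (u >= 0) & (u < W) & (v >= 0) & (v < H)
            idx = np.flatnonzero(in_front)[visible]
            labels[idx] = label_image[v[visible], u[visible]]
            return labels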

    Static-map and Dynamic Object Reconstruction in Outdoor Scenes using 3D Motion Segmentation

    This paper aims to build the static-map of a dynamic scene using a mobile robot equipped with 3D sensors. The sought static-map consists only of the static scene parts and plays a vital role in scene understanding and landmark-based navigation. Building the static-map requires categorizing moving and static objects. In this work, we propose a Sparse Subspace Clustering-based Motion Segmentation method that categorizes the static scene parts and the multiple moving objects using their 3D motion trajectories. Our motion segmentation method uses the raw trajectory data, allowing the objects to move directly in 3D space, without any projection model assumption whatsoever. We also propose a complete pipeline for static-map building that estimates the inter-frame motion parameters by applying the minimal 3-point Random Sample Consensus algorithm to feature correspondences taken only from the static scene parts. The proposed method has been specifically designed and tested for large scenes in real outdoor environments. On the one hand, our 3D Motion Segmentation approach outperforms its 2D-based counterparts in extensive experiments on the KITTI dataset; on the other hand, the separately reconstructed static-maps and moving objects for various dynamic scenes are very satisfactory.
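
    The Sparse Subspace Clustering step operates on the raw trajectory vectors: each trajectory is expressed as a sparse combination of the others, and the resulting coefficients define an affinity graph that is partitioned by spectral clustering. The sketch below is a compact, generic SSC using a Lasso solver, not the authors' exact optimisation; the regularisation value and the assumed number of motions are illustrative.

        import numpy as np
        from sklearn.linear_model import Lasso
        from sklearn.cluster import SpectralClustering

        def ssc_motion_segmentation(trajectories, n_motions=2, alpha=0.01):
            """Cluster 3D feature trajectories into rigid motions with a basic SSC.

            trajectories : (N, 3*F) array; each row stacks one feature's x, y, z over F frames
            n_motions    : assumed number of motions (the static background counts as one)
            """
            X = trajectories.T                         # columns are trajectories
            N = X.shape[1]
            C = np.zeros((N, N))
            for i in range(N):
                # Express trajectory i as a sparse combination of all the other trajectories.
                others = np.delete(X, i, axis=1)
                model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(others, X[:, i])
                C[np.arange(N) != i, i] = model.coef_
            A = np.abs(C) + np.abs(C).T                # symmetric affinity from the sparse coefficients
            return SpectralClustering(n_clusters=n_motions,
                                      affinity='precomputed').fit_predict(A)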