5 research outputs found

    An efficient direct method for improving visual SLAM

    Abstract: Traditionally in monocular SLAM, interest features are extracted and matched in successive images. Outliers are rejected a posteriori during a pose estimation process, and then the structure of the scene is reconstructed. In this paper, we propose a new approach for robustly and simultaneously computing the 3D camera displacement, the scene structure and the illumination changes directly from image intensity discrepancies. In this way, instead of depending on particular features, all possible image information is exploited. This problem is solved by using an efficient second-order optimization procedure, and thus high convergence rates and large domains of convergence are obtained. Furthermore, a new solution to the visual SLAM initialization problem is given whereby no assumptions are made either about the scene or the camera motion. The proposed approach is validated on experimental and simulated data. Comparisons with existing methods show significant performance improvements.
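    A minimal sketch of the direct, intensity-based formulation described above is given below. It assumes NumPy and SciPy, and replaces the paper's full state (3D displacement, scene structure and illumination) with a toy one, a 2D translation plus an affine illumination gain/bias, optimized with plain Gauss-Newton; the function name and parameters are illustrative and not the authors' implementation.

```python
# Sketch of a direct photometric alignment step, assuming NumPy and SciPy.
# The paper jointly optimizes the full 3D displacement, scene structure and
# illumination with an efficient second-order method; here a 2D translation
# plus an affine illumination model (gain a, bias b) stands in for that state.
import numpy as np
from scipy.ndimage import map_coordinates, sobel

def photometric_gauss_newton(ref, cur, p, n_iters=20):
    """p = [tx, ty, a, b]: warp translation and illumination gain/bias."""
    p = np.asarray(p, dtype=float)
    ys, xs = np.mgrid[0:ref.shape[0], 0:ref.shape[1]].astype(float)
    for _ in range(n_iters):
        tx, ty, a, b = p
        # Warp the current image towards the reference and model illumination.
        warped = map_coordinates(cur, [ys + ty, xs + tx], order=1)
        pred = a * warped + b
        r = (pred - ref).ravel()                      # intensity discrepancies
        # Image gradients of the warped image (chain rule for tx, ty).
        gx = sobel(warped, axis=1) / 8.0
        gy = sobel(warped, axis=0) / 8.0
        J = np.stack([a * gx.ravel(), a * gy.ravel(),
                      warped.ravel(), np.ones(r.size)], axis=1)
        # Gauss-Newton update; the ESM scheme of the paper would average the
        # reference and current gradients for second-order convergence.
        dp = np.linalg.lstsq(J, -r, rcond=None)[0]
        p = p + dp
        if np.linalg.norm(dp) < 1e-6:
            break
    return p
```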

    Localization and navigation of an autonomous mobile robot through odometry and stereoscopic vision

    Advisor: Paulo Roberto Gardel Kurka. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica. Abstract: This work presents the implementation of a navigation system with stereoscopic vision on a mobile robot, allowing the construction of an environment map and localization. This requires the robot's kinematic model, control techniques, algorithms for identifying features in images (such as SIFT), 3D reconstruction with stereoscopic vision, and navigation algorithms. Camera calibration methods developed within the FEM/UNICAMP research group and taken from the literature are used. Results of experimental and theoretical analyses are compared. Additional results show the validation of the camera calibration algorithm, the accuracy of the sensors, the response of the control system, and the 3D reconstruction. These results are relevant to future studies on robot navigation and camera calibration. Degree: Mestre em Engenharia Mecânica, Mecânica dos Sólidos e Projeto Mecânico.
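    As an illustration of the stereo reconstruction step described above, the following sketch (assuming OpenCV and NumPy) matches SIFT keypoints between the left and right images and triangulates them into 3D points; the projection matrices P_left and P_right are placeholders standing for the output of the camera calibration, and the function name is hypothetical rather than taken from the dissertation.

```python
# Hedged sketch of SIFT-based stereo reconstruction, assuming OpenCV and NumPy.
import cv2
import numpy as np

def stereo_reconstruct(img_left, img_right, P_left, P_right):
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)

    # Ratio-test matching, as commonly paired with SIFT descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_l, des_r, k=2)
    good = []
    for pair in matches:
        if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
            good.append(pair[0])

    pts_l = np.float32([kp_l[m.queryIdx].pt for m in good]).T  # 2xN
    pts_r = np.float32([kp_r[m.trainIdx].pt for m in good]).T

    # Linear triangulation into homogeneous 3D points, then dehomogenize.
    pts4d = cv2.triangulatePoints(P_left, P_right, pts_l, pts_r)
    pts3d = (pts4d[:3] / pts4d[3]).T                            # Nx3
    return pts3d, good
```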

    Detection and tracking of mobile objects perceived from an on-board visual sensor

    This thesis addresses the detection and tracking of mobile objects in a dynamic environment, using a camera embedded on a mobile robot. This remains a significant challenge because only monocular vision is used to solve it. Mobile objects must be detected in the scene by analysing their apparent motion in the images, while excluding the motion induced by the camera's own displacement. In a first stage, a spatio-temporal analysis of the image sequence is proposed, based on sparse optical flow. An a contrario clustering method groups the dynamic points without prior information on the number of groups to form and without parameter tuning. The success of this method relies on accumulating enough data to properly characterise the position and velocity of the points; the time needed to acquire the images required for this characterisation is called the tracking time. A probabilistic map is built to find the image areas with the highest probability of containing a mobile object; this map allows an active selection of new points near the previously detected regions, enlarging those regions. In a second stage, an iterative approach performs detection, clustering and tracking on image sequences acquired from a fixed camera, indoors and outdoors. An object is represented by an active contour, which is updated so that the initial object model remains inside the contour. Finally, experimental results are presented on images acquired from a camera embedded on a mobile robot moving in an outdoor environment with rigid and non-rigid mobile objects. The method is shown to be usable for detecting obstacles during navigation in an a priori unknown environment, first at low speeds and then at more realistic speeds after compensating for the robot's ego-motion in the images.
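    The detection stage can be illustrated with the sketch below, assuming OpenCV and scikit-learn: sparse points are tracked with pyramidal Lucas-Kanade optical flow and grouped in a joint position/velocity space. DBSCAN is used here only as a simple stand-in for the parameter-free a contrario clustering used in the thesis, and all names and thresholds are illustrative.

```python
# Sketch of moving-point detection and grouping, assuming OpenCV and sklearn.
import cv2
import numpy as np
from sklearn.cluster import DBSCAN

def detect_moving_groups(prev_gray, cur_gray, flow_threshold=2.0):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return []
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, cur_gray, pts, None)

    ok = status.ravel() == 1
    p0 = pts.reshape(-1, 2)[ok]
    p1 = nxt.reshape(-1, 2)[ok]
    flow = p1 - p0

    # Keep points whose apparent motion exceeds a threshold (with a static
    # camera; with an on-board camera the ego-motion must be compensated first).
    moving = np.linalg.norm(flow, axis=1) > flow_threshold
    feats = np.hstack([p1[moving], flow[moving]])   # (x, y, vx, vy)
    if len(feats) == 0:
        return []

    # DBSCAN stands in for the a contrario clustering: group points that are
    # close in both position and velocity, without fixing the number of groups.
    labels = DBSCAN(eps=20.0, min_samples=5).fit_predict(feats)
    return [p1[moving][labels == k] for k in set(labels) if k != -1]
```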

    An Efficient Direct Method for Improving Visual SLAM

    No full text

    Visual Odometry In Mobile Robots

    No full text
    The paper presents an application of visual odometry, through reconstruction of the path of a mobile robot, using a stereoscopic camera system. The scale invariant feature transformation algorithm (SIFT), is used to process the images and locate keypoints in a 3D euclidian coordinates space. The path of a Pioneer mobile robot is estimated using the proposed technique. © 2011 IEEE.Lower, D.G., Distinctive image features from scale-invariant keypoints (2004) International Journal of Computer VisionSmith, M.R., Estimating uncertain spatial relationships in robôics (1990) Auton. Robô Veh., 8, pp. 167-193. , PSilveira, G., Malis, E., Rives, An efficient direct approach to visual SLAM (2008) IEEE-j-ro, 24, pp. 969-979. , P. 5Silveira, G., Malis, E., Rives, (2007) An Efficient Direct Method for Improving Visual SLAM, pp. 4090-4095. , PDavison, A.J., Murray, D.W., Simultaneous localization and mapbuilding using active vision (2002) IEEE-j-pami, 24, pp. 865-880. , 7Zhu, Z., Keeping smart, omnidirectional eyes on you [adaptive panoramic stereovision] (2004) IEEE Robôics & Automation Magazine, 11, pp. 69-78. , 4Saeedi, P., Lawrence, P.D., Lowe, D.G., Vision-based 3-D trajectory tracking for unknown environments (2006) IEEE-j-ro, 22, pp. 119-136. , 1Steder, B., Visual SLAM for flying vehicles (2008) IEEE-j-ro, 24, pp. 1088-1093. , 5Paz, L.M., Large-scale 6-DOF SLAM with stereo-in-hand (2008) IEEE-j-ro, 24, pp. 946-957. , 5Mahon, I., Efficient view-based SLAM using visual loop closures (2008) IEEE-j-ro, 24, pp. 1002-1014. , 5Hebert, M., Kanade, T., 3-D vision for outdoor navigation by an autonomous vehicle (1998) Proc. Image Understanding Workshop, pp. 593-601. , San Mateo, CA, aprilKriegman, D.J., Triendl, E., Binford, T.O., Stereo vision and navigation in building for mobile robots (1989) IEEE Transaction on Robotics and Automation, 5 (6), pp. 792-803Thorpe, C.E., Hebert, M., Kanade, T., Shafer, S., Vision and navigation for the carnegie-mellon navlab (1988) IEEE Trans. Pattern Anal. Mach. Intell., 10 (3), pp. 362-373. , MarTurk, M.A., Morgenthaler, D.G., Gremban, K.D., Marra, M., VITS - A vision system for autonomous land vehicle navigation (1988) IEEE Trans. Pattern Anal. Mach. Intell., 3, pp. 342-361. , MarWaxman, A.M., A visual navigation system for autonomous land vehicles (1987) IEEE J. Robot. Autom., RA-3 (2), pp. 124-141. , AprMa, Y., Kosecká, J., Sastry, S.S., Vision guided navigation for a nonholonomic mobile robot (1999) IEEE Transactions on Robotics and Automation, 15 (3). , JunChoomuang, R., Afzulpurkar, N., Hybrid Kalman filter/fuzzy logic basead position control of autonomous mobile robot (2005) International Journal of Advanced Robotic System, 2 (3), pp. 197-208Karlsson, N., Di Bernado, E., Ostrowski, J., Gonçalves, L., Pirjanian, P., Munich, M.E., The vSLAM algorithm for robust localization and mapping (2005) IEEE, International Conference on Robotics and Automation, pp. 24-29. , Barcelona, SpainHeikkilä, J., Geometric camera calibration using circular control points (2000) IEEE Transactions on Pattern Analysis and Machine Intelligence, 22 (10), pp. 1066-1077. , OctVedaldi, A., Fulkerson, B., (2008) An Open and Portable Library of Computer Vision Algorithms, , http://www.vlfeat.org