219 research outputs found

    Under vehicle perception for high level safety measures using a catadioptric camera system

    In recent years, under-vehicle surveillance and vehicle classification have become indispensable security tasks in areas such as shopping centers, government buildings and army camps. The main challenge in this task is monitoring the underframes of vehicles. In this paper, we present a novel solution consisting of three main parts: monitoring, detection and classification. In the first part we design a new catadioptric camera system in which a perspective camera points downwards at a catadioptric mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, scenes opposite the camera's optical axis direction can be viewed. In the second part we use speeded-up robust features (SURF) in an object recognition algorithm. Fast appearance-based mapping (FAB-MAP) is exploited for vehicle classification in the third part. The proposed technique is implemented in a laboratory environment.
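    The appearance-based classification step can be illustrated with a deliberately simplified sketch. FAB-MAP itself is a probabilistic model over visual words; the toy version below (not the authors' implementation) reduces the idea to bag-of-visual-words histograms of local descriptors (e.g. SURF) compared by cosine similarity. All names here (`bow_histogram`, `classify`) are illustrative assumptions.

    ```python
    import numpy as np

    def bow_histogram(descriptors, vocabulary):
        """Quantize local feature descriptors against a visual vocabulary
        and return a normalized bag-of-words histogram."""
        # Assign each descriptor to its nearest vocabulary word.
        dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
        words = np.argmin(dists, axis=1)
        hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
        return hist / (hist.sum() or 1.0)

    def classify(query_descriptors, class_histograms, vocabulary):
        """Return the class whose reference histogram is most similar
        (cosine similarity) to the query image's histogram."""
        q = bow_histogram(query_descriptors, vocabulary)
        def cosine(a, b):
            return float(a @ b) / ((np.linalg.norm(a) * np.linalg.norm(b)) or 1.0)
        return max(class_histograms, key=lambda c: cosine(q, class_histograms[c]))
    ```

    A real FAB-MAP system additionally models word co-occurrence (via a Chow–Liu tree) and returns a probability rather than a nearest class; this sketch keeps only the appearance-histogram core.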

    Fast Central Catadioptric Line Extraction

    Lines are particularly important features for computer vision tasks such as calibration, structure from motion and 3D reconstruction. However, line detection in catadioptric images is not trivial because the projection of a 3D line is a conic, possibly degenerate. If the sensor is calibrated, it has already been demonstrated that each such conic can be described by two parameters. Accordingly, some methods based on adapting conventional line detection techniques have been proposed, but most of them suffer from the same drawbacks as in the perspective case (computing time, accuracy, robustness, ...). In this paper, we therefore propose a new method for line detection in central catadioptric images, comparable to the polygonal approximation approach. With this method, only two points of a chain are needed to extract a catadioptric line with very high accuracy. Moreover, the algorithm is particularly fast and applicable in real time. We also present experimental results with quantitative and qualitative evaluations that show the quality of the results and the perspectives of this method.
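    The two-parameter description can be made concrete: on the unit sphere of a calibrated central catadioptric sensor, a 3D line projects to a great circle, whose unit normal carries both parameters, which is why two chain points suffice. The numpy sketch below illustrates this standard geometry; it is not the authors' extraction code.

    ```python
    import numpy as np

    def line_normal_from_two_points(p1, p2):
        """Given two image points lifted onto the unit sphere, recover the
        unit normal n of the great circle (the catadioptric line). The two
        degrees of freedom of n are the line's two parameters."""
        n = np.cross(p1, p2)
        return n / np.linalg.norm(n)

    def on_catadioptric_line(p, n, tol=1e-9):
        """A lifted point p lies on the line's great circle iff n . p == 0."""
        return abs(np.dot(n, p)) < tol
    ```

    In an extraction pipeline, the residual `|n . p|` over the remaining chain points would then serve as the fitting error when deciding whether a chain follows a single line.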

    Omnidirectional Stereo Vision for Autonomous Vehicles

    Environment perception with cameras is an important requirement in many applications for autonomous vehicles and robots. This work presents a stereoscopic omnidirectional camera system for autonomous vehicles which resolves the problem of a limited field of view and provides a 360° panoramic view of the environment. We present a new projection model for these cameras and show that the camera setup overcomes major drawbacks of traditional perspective cameras in many applications.
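    As an illustration of how stereo depth works with panoramic views, the sketch below triangulates the horizontal distance to a point from its elevation angles in two vertically stacked panoramas. This is generic stereo geometry under an assumed ideal setup, not the projection model proposed in the paper.

    ```python
    import math

    def panoramic_stereo_depth(theta_bottom, theta_top, baseline):
        """Horizontal distance to a scene point seen by two panoramic
        cameras stacked vertically a 'baseline' apart. The difference of
        elevation-angle tangents plays the role of stereo disparity:
        tan(theta_bottom) - tan(theta_top) = baseline / distance."""
        disparity = math.tan(theta_bottom) - math.tan(theta_top)
        if abs(disparity) < 1e-12:
            return float('inf')  # zero disparity: point effectively at infinity
        return baseline / disparity
    ```

    A vertical baseline is a natural choice for omnidirectional stereo because the disparity is then observable in every azimuth direction of the 360° view.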

    Conic projections of lines in catadioptric systems for visual perception in man-made environments

    Omnidirectional vision systems are devices that acquire images with a field of view of 360° about one axis and more than 180° about the other. The need to integrate these cameras into computer vision systems has driven research in this field, deepening the mathematical models and the theoretical groundwork required to implement applications. Several technologies exist for obtaining omnidirectional images. Catadioptric systems are those that enlarge the field of view using mirrors. Among them, hypercatadioptric systems combine a perspective camera with a hyperbolic mirror; the hyperbolic geometry of the mirror guarantees that the system is central. In these systems, lines in space take on special relevance, since long lines are completely visible in a single image. The straight line is a geometric shape that abounds in man-made environments and, moreover, tends to be organized along dominant directions. Apart from singular constructions, gravity fixes a vertical direction that can serve as a reference for computing the orientation of the system. However, using lines in catadioptric systems carries the added difficulty of working with a nonlinear projection model in which 3D lines are projected to conics. This master's thesis gathers the work presented in the article "Significant Conics on Catadioptric Images for 3D Orientation and Image Rectification", which we intend to submit to Robotics and Autonomous Systems. It presents a method to compute the orientation of a hypercatadioptric system using the conics that are projections of 3D lines. The method computes the orientation with respect to the absolute reference frame defined by the set of vanishing points in an environment with dominant directions.
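    The orientation idea can be sketched as follows: each vertical scene line projects, on the unit sphere of the calibrated system, to a conic whose great-circle normal n_i is orthogonal to the vertical direction v (n_i · v = 0). Stacking several normals and taking the least-squares null vector recovers v. The numpy sketch below is an illustrative assumption, not the article's full method.

    ```python
    import numpy as np

    def vertical_direction(normals):
        """Estimate the vertical direction v from the great-circle
        normals of vertical lines: each normal satisfies n_i . v = 0,
        so v is the right singular vector of the stacked normal matrix
        associated with the smallest singular value (least squares)."""
        A = np.asarray(normals, dtype=float)
        _, _, Vt = np.linalg.svd(A)
        v = Vt[-1]
        return v / np.linalg.norm(v)
    ```

    The sign of v is ambiguous from the lines alone; in practice it would be fixed by a convention such as the mirror axis pointing upward.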

    Fast computational processing for mobile robots' self-localization

    This paper presents a different approach to the self-localization problem in RoboCup Middle Size League games, developed by researchers of the MINHO team. The method uses the white field markings as key points to compute the position with least error, building an error map whose minimum corresponds to the true position; the error at each candidate position is computed by comparing the detected line points with a precomputed set of values for that position. This approach allows very fast local and global localization, so global localization can be run more often, driving the estimate toward its true value. Unlike the majority of other teams in this league, the aim was to come up with a new and improved method for the traditionally slow self-localization problem. This work was developed at the Laboratório de Automação e Robótica by the MINHO team's research and development team at the University of Minho, under the supervision of Professor A. Fernando Ribeiro and A. Gil Lopes. The exchange of knowledge within the RoboCup MSL community contributed greatly to the development of this work. This work has been supported by COMPETE: POCI-01-0145-FEDER-007043 and FCT – Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2013.
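    The error-minimization idea can be illustrated with a toy example: a candidate pose transforms the detected line points into the field frame, and the error is the sum of precomputed distance-to-nearest-field-line values looked up at those points; the pose with the least total error wins. The names and the brute-force candidate search below are illustrative assumptions, not the team's implementation.

    ```python
    import numpy as np

    def pose_error(line_points, pose, distance_map, resolution):
        """Error of a candidate pose (x, y, theta): transform the detected
        line points (robot frame) into the field frame and sum the
        precomputed distance-to-nearest-line values at those cells."""
        x, y, th = pose
        c, s = np.cos(th), np.sin(th)
        pts = np.asarray(line_points)
        world = np.stack([x + c * pts[:, 0] - s * pts[:, 1],
                          y + s * pts[:, 0] + c * pts[:, 1]], axis=1)
        idx = np.clip((world / resolution).astype(int),
                      0, np.array(distance_map.shape) - 1)
        return float(distance_map[idx[:, 0], idx[:, 1]].sum())

    def localize(line_points, candidates, distance_map, resolution):
        """Brute-force global localization: the candidate with least error."""
        return min(candidates, key=lambda p: pose_error(line_points, p,
                                                        distance_map, resolution))
    ```

    Because the distance map is precomputed once from the known field model, each candidate pose costs only a handful of lookups, which is what makes frequent global localization affordable.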

    Metric and appearance based visual SLAM for mobile robots

    Simultaneous Localization and Mapping (SLAM) underpins autonomy for mobile robots and has been studied extensively during the last two decades. It is the process of building the map of an unknown environment while concurrently determining the location of the robot within that map. Different kinds of sensors, such as the Global Positioning System (GPS), inertial measurement units (IMU), laser range finders and sonar, are used for data acquisition in SLAM. In recent years, passive visual sensors have been utilized in the visual SLAM (vSLAM) problem because of their increasing ubiquity. This thesis is concerned with the metric and appearance-based vSLAM problems for mobile robots. For metric vSLAM, a performance improvement technique is developed: template-matching-based video stabilization is integrated with the Harris corner detector, and extracting Harris corner features from the stabilized video consistently increases localization accuracy. Data coming from a video camera and odometry are fused in an Extended Kalman Filter (EKF) to determine the pose of the robot and build the map of the environment. Simulation results validate the performance improvement obtained by the proposed technique. Moreover, a visual perception system is proposed for appearance-based vSLAM and used for under-vehicle classification. The proposed system consists of three main parts: monitoring, detection and classification. In the first part, a new catadioptric camera system is designed, in which a perspective camera points downwards at a convex mirror mounted on the body of a mobile robot. Thanks to the catadioptric mirror, scenes opposite the camera's optical axis direction can be viewed. In the second part, speeded-up robust features (SURF) are used to detect hidden objects under vehicles. Fast appearance-based mapping (FAB-MAP) is then exploited for vehicle classification in the third part.
    Experimental results show the feasibility of the proposed system. The proposed solution is implemented on a non-holonomic mobile robot. In the implementation, the undersides of the tables in the laboratory are treated as vehicle underframes. A database that includes different under-vehicle images is used. All the algorithms are implemented in Microsoft Visual C++ and OpenCV 2.4.4.
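    The camera/odometry fusion step can be illustrated with a deliberately simplified linear Kalman filter over a 2D position. The thesis uses a full EKF with Harris-corner landmarks; this toy instead assumes the vision side yields a direct position measurement, so the measurement model is the identity.

    ```python
    import numpy as np

    def kf_predict(x, P, u, Q):
        """Prediction: propagate the state with the odometry increment u
        and inflate the covariance by the process noise Q."""
        return x + u, P + Q

    def kf_update(x, P, z, R):
        """Update: fuse a direct position measurement z (covariance R).
        With H = I the Kalman gain reduces to K = P (P + R)^-1."""
        K = P @ np.linalg.inv(P + R)
        x_new = x + K @ (z - x)
        P_new = (np.eye(len(x)) - K) @ P
        return x_new, P_new
    ```

    The fused estimate lands between the odometry prediction and the camera measurement, weighted by their covariances, and the posterior covariance shrinks, which is the essence of the EKF fusion used in the thesis.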

    Spherical Image Processing for Accurate Visual Odometry with Omnidirectional Cameras

    Due to their omnidirectional view, catadioptric cameras are of great interest for robot localization and visual servoing. For simplicity, most vision-based algorithms use image processing tools (e.g. image smoothing) that were designed for perspective cameras. This can be a good approximation when the camera displacement is small with respect to the distance to the observed environment. Otherwise, perspective image processing tools cannot accurately handle the signal distortion induced by the specific geometry of omnidirectional cameras. In this paper, we propose an appropriate spherical image processing pipeline for increasing the accuracy of visual odometry estimation. The omnidirectional images are mapped onto a unit sphere and treated in the spherical spectral domain. Because this spherical processing takes the specific geometry of omnidirectional cameras into account, we can design, for example, a more accurate and more repeatable Harris interest point detector. The interest points can be matched between two images with a large baseline in order to accurately estimate the camera motion. A real experiment demonstrates the accuracy of the visual odometry obtained using the spherical image processing and the improvement over standard perspective image processing.
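    The mapping onto the unit sphere can be sketched with the unified central catadioptric model (mirror parameter ξ), a common choice for such lifting; the paper's own calibration may differ, so treat this as an illustrative assumption.

    ```python
    import numpy as np

    def lift_to_sphere(u, v, xi):
        """Lift a normalized image point (u, v) of a calibrated central
        catadioptric camera onto the unit sphere under the unified model
        with mirror parameter xi. Spherical processing (e.g. a spherical
        Harris detector) then operates on these sphere points."""
        r2 = u * u + v * v
        eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
        return np.array([eta * u, eta * v, eta - xi])

    def project_from_sphere(p, xi):
        """Inverse mapping: unit-sphere point back to the image plane."""
        x, y, z = p
        return x / (z + xi), y / (z + xi)
    ```

    With xi = 0 this reduces to the ordinary pinhole back-projection, which shows in what sense the unified model generalizes the perspective case.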

    Real-Time Multi-Fisheye Camera Self-Localization and Egomotion Estimation in Complex Indoor Environments

    In this work, a real-time-capable multi-fisheye-camera self-localization and egomotion estimation framework is developed. The thesis covers all aspects, ranging from omnidirectional camera calibration to a complete multi-fisheye-camera SLAM system based on a generic multi-camera bundle adjustment method.
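    For intuition about fisheye calibration, a common single-parameter model is the equidistant one, in which the image radius grows linearly with the angle between the incoming ray and the optical axis. The thesis relies on a generic calibration, so the sketch below is only an illustrative assumption.

    ```python
    import math

    def fisheye_project(X, Y, Z, f):
        """Equidistant fisheye model: r = f * theta, where theta is the
        angle between the ray (X, Y, Z) and the optical axis (Z-axis)."""
        theta = math.atan2(math.hypot(X, Y), Z)
        phi = math.atan2(Y, X)  # azimuth around the optical axis
        r = f * theta
        return r * math.cos(phi), r * math.sin(phi)

    def fisheye_fov_radius(f, fov_deg):
        """Image-circle radius needed to cover a given field of view."""
        return f * math.radians(fov_deg) / 2.0
    ```

    Unlike the perspective model, this projection stays finite even for rays at 90° (and beyond) from the axis, which is what lets a fisheye lens cover a hemispheric field of view.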