
    Software Porting of a 3D Reconstruction Algorithm to Razorcam Embedded System on Chip

    A method is presented to calculate depth information for a UAV navigation system from keypoints in two consecutive image frames, using a monocular camera sensor as input and the OpenCV library. This method was first implemented in software and run on a general-purpose Intel CPU, then ported to the RazorCam Embedded Smart-Camera System and run on an ARM CPU onboard the Xilinx Zynq-7000. The results of performance and accuracy testing of the software implementation are shown and analyzed, demonstrating a successful port of the software to the RazorCam embedded system on chip, which could potentially be used onboard a UAV with tight constraints on size, weight, and power. The potential impacts will be seen through the continuation of this research in the Smart ES lab at the University of Arkansas.
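The core of the two-frame depth recovery this abstract describes is triangulating matched keypoints once the relative camera motion between the frames is known. Below is a minimal NumPy sketch of the linear (DLT) triangulation step only; the camera matrices and pixel coordinates are illustrative placeholders, not values from the paper, and a real pipeline (e.g. with OpenCV) would first estimate the motion from the matched keypoints:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices for the two frames.
    x1, x2: (u, v) pixel coordinates of the matched keypoint in each frame.
    """
    # Each view contributes two linear constraints on the homogeneous point X.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize -> 3D point; its z is the depth

# Illustrative setup: two cameras with the same intrinsics, the second
# translated one unit along x (all values are made up for the example).
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.], [0.], [0.]])])
```

With noise-free projections of a known 3D point, the sketch recovers that point exactly; with real keypoint matches, the accuracy depends on the motion estimate and match quality, which is what the paper's testing evaluates.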

    Motorcycles that see: Multifocal stereo vision sensor for advanced safety systems in tilting vehicles

    Advanced driver assistance systems (ADAS) have shown the potential to anticipate crashes and effectively assist road users in critical traffic situations. This is not the case for motorcyclists; in fact, ADAS for motorcycles are still barely developed. Our aim was to study a camera-based sensor for the application of preventive safety in tilting vehicles. We identified two road conflict situations in which automotive remote sensors installed on a tilting vehicle are likely to fail to identify critical obstacles. Accordingly, we designed two experiments conducted in real traffic conditions to test our stereo vision sensor. Our promising results support the application of this type of sensor in advanced motorcycle safety systems.

    Position estimation using a stereo camera as part of the perception system in a Formula Student car

    This thesis presents part of the implementation of the perception system of an autonomous Formula Student vehicle. More precisely, it develops two different pipelines to process the data from the two main sensors of the vehicle: a LiDAR and a stereo camera. The first, a stereo camera system built from two monocular cameras, provides traffic-cone position estimates based on the detections made by a convolutional neural network. These positions are obtained using a self-designed stereo processing algorithm based on 2D-3D position estimates and on keypoint extraction and matching. The second is a sensor fusion system that first registers both sensors using an extrinsic calibration procedure implemented for this purpose. It then exploits the neural network detections from the stereo system to project the LiDAR point cloud onto the image, combining robust image-based detection with the positional accuracy of the LiDAR point cloud. These two systems are evaluated, compared, and integrated into "Xaloc", the Formula Student vehicle developed by the Driverless UPC team.
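The fusion step described above, projecting the LiDAR point cloud onto the camera image once the extrinsic calibration is known, can be sketched as follows. The rotation R, translation t, and intrinsics K here are illustrative placeholders, not the calibration from the thesis:

```python
import numpy as np

def project_lidar_to_image(points, R, t, K):
    """Project LiDAR points into pixel coordinates.

    points: (N, 3) LiDAR points in the LiDAR frame.
    R, t:   extrinsic calibration (rotation and translation LiDAR -> camera).
    K:      3x3 camera intrinsic matrix.
    Returns pixel coordinates for points in front of the camera, plus the mask.
    """
    # Transform LiDAR points into the camera frame.
    cam = points @ R.T + t
    # Keep only points in front of the camera (positive depth).
    in_front = cam[:, 2] > 0
    cam = cam[in_front]
    # Pinhole projection followed by the perspective divide.
    uv = cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, in_front

# Illustrative calibration: camera and LiDAR frames coincide (made-up values).
K = np.array([[500., 0., 320.], [0., 500., 240.], [0., 0., 1.]])
R = np.eye(3)
t = np.zeros(3)
```

In the pipeline the abstract describes, the projected points falling inside a neural-network detection box would then supply the accurate 3D position for that cone.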

    LiDAR Enhanced Structure-from-Motion

    Although Structure-from-Motion (SfM) is a mature technique widely used in many applications, state-of-the-art SfM algorithms are still not robust enough in certain situations. For example, images for inspection purposes are often taken at close range to capture detailed textures, which results in less overlap between images and thus decreases the accuracy of the estimated motion. In this paper, we propose a LiDAR-enhanced SfM pipeline that jointly processes data from a rotating LiDAR and a stereo camera pair to estimate sensor motions. We show that incorporating LiDAR helps to effectively reject falsely matched images and significantly improves model consistency in large-scale environments. Experiments are conducted in different environments to test the performance of the proposed pipeline, and comparison results against state-of-the-art SfM algorithms are reported. Comment: 6 pages plus references. Work has been submitted to ICRA 202