34 research outputs found

    Hardware-accelerated image features with subpixel accuracy for SLAM localization and object detection

    Get PDF
    The navigation of autonomous systems is becoming more and more complex due to advances in technology and the increasing demands of applications. One of the most critical open issues is the accuracy and robustness of feature-based SLAM localization for three-dimensional applications. In this work, the optimization of feature detection with subpixel-accurate feature positions for feature-based 6-DoF SLAM methods is investigated. In addition, an extension of the feature descriptor with color information and a subpixel-accurate rotation of the descriptor pattern is evaluated. From these results, the Subpixel-accurate Oriented AGAST and Rotated BRIEF (SOARB) feature extraction method is developed, which, despite its efficient and resource-optimized implementation, improves localization and mapping compared to other comparable algorithms. Using a PCIe FPGA accelerator and the Xilinx SDAccel HW/SW co-design environment with OpenCL support, an FPGA-based version of the SOARB algorithm for interfacing with SLAM systems is demonstrated. The hardware implementation uses high-throughput pipeline processing and parallel computation units.
    For faster processing, the subpixel refinement and a bilinear interpolation are performed in fixed-point arithmetic, and the angle calculation is implemented using a CORDIC method. The FPGA implementation of the SOARB algorithm achieves frame rates of 41 frames/s, making it a factor of 2.6x faster than the fastest tested GPU-based OpenCV implementation with subpixel-accurate feature positions. With a low power consumption of 13.7 W for the FPGA component, the overall system power efficiency (frames/s per watt) is increased by a factor of 1.28x compared to a SOARB GPU reference implementation that was also developed. For evaluation, the SOARB algorithm is integrated into the RTAB-Map SLAM system, where it achieves average improvements of 22% and 19% in translational and rotational error compared to the commonly used ORB feature extraction in tests with road-traffic image sequences. The maximum improvement in root mean square error (RMSE) is 50% for translation and 40% for rotation. To analyze the impact of a descriptor with color information, the SOARB-RGB method is evaluated using the Oxford dataset for affine covariant features, where it achieves a very good inlier ratio of 99.2% over the first three image comparisons of all sequences.
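    The angle computation is a good example of why CORDIC fits the FPGA setting: it reduces atan2 to shift-and-add iterations with a small lookup table, needing no multiplier or floating-point unit. Below is a minimal software sketch of a fixed-point vectoring-mode CORDIC of the kind the abstract describes. It is an illustrative re-implementation, not the thesis code; the Q16.16 angle format, 16 iterations, and the moment-based orientation example in main() are assumptions.

```cpp
// Fixed-point CORDIC atan2 sketch (vectoring mode), Q16.16 angles.
#include <cmath>
#include <cstdint>
#include <cstdio>

constexpr int    STEPS = 16;
constexpr double Q     = 65536.0;                 // Q16.16 scaling
constexpr double PI    = 3.14159265358979323846;

static int32_t atan_lut[STEPS];                   // atan(2^-i) in Q16.16

static void init_lut() {
    for (int i = 0; i < STEPS; ++i)
        atan_lut[i] = (int32_t)std::llround(std::atan(std::ldexp(1.0, -i)) * Q);
}

// Drives y toward zero with shift-add micro-rotations and accumulates the
// applied angle, yielding atan2(y, x) in Q16.16 radians. Integer adds and
// shifts only (arithmetic right shift assumed). Keep |x|, |y| below ~2^29
// so the ~1.65x CORDIC gain cannot overflow 32 bits.
static int32_t cordic_atan2(int32_t x, int32_t y) {
    int32_t z = 0;
    if (x < 0) {                                  // pre-rotate into x > 0 half-plane
        z = (int32_t)std::llround((y >= 0 ? PI : -PI) * Q);
        x = -x; y = -y;
    }
    for (int i = 0; i < STEPS; ++i) {
        int32_t xs = x >> i, ys = y >> i;
        if (y > 0) { x += ys; y -= xs; z += atan_lut[i]; }
        else       { x -= ys; y += xs; z -= atan_lut[i]; }
    }
    return z;
}

int main() {
    init_lut();
    // Hypothetical ORB/AGAST-style orientation from patch intensity
    // moments m10, m01 (made-up values): angle = atan2(m01, m10).
    int32_t m10 = 51234, m01 = -20411;
    std::printf("CORDIC %.5f rad, libm %.5f rad\n",
                cordic_atan2(m10, m01) / Q,
                std::atan2((double)m01, (double)m10));
}
```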

    Embedded Vision Systems: A Review of the Literature

    Get PDF
    Over the past two decades, the use of low-power Field Programmable Gate Arrays (FPGAs) for the acceleration of various vision systems, mainly on embedded devices, has become widespread. The reconfigurable and parallel nature of the FPGA opens up new opportunities to speed up computationally intensive vision and neural algorithms on embedded and portable devices. This paper presents a comprehensive review of embedded vision algorithms and applications over the past decade. The review discusses vision-based systems and approaches, and how they have been implemented on embedded devices. Topics covered include image acquisition, preprocessing, object detection and tracking, and recognition as well as high-level classification. This is followed by an outline of the advantages and disadvantages of the various embedded implementations. Finally, an overview of the challenges in the field and future research trends is presented. This review is expected to serve as a tutorial and reference source for embedded computer vision systems.

    Real-time Visual Flow Algorithms for Robotic Applications

    Get PDF
    Vision offers important sensor cues to modern robotic platforms. Applications such as control of aerial vehicles, visual servoing, simultaneous localization and mapping, navigation and, more recently, learning are examples where visual information is fundamental to accomplishing tasks. However, the use of computer vision algorithms carries the computational cost of extracting useful information from the stream of raw pixel data. The most sophisticated algorithms use complex mathematical formulations, typically leading to computationally expensive and consequently slow implementations. Even with modern computing resources, high-speed and high-resolution video feeds can only be used for basic image processing operations. For a vision algorithm to be integrated on a robotic system, the output of the algorithm should be provided in real time, that is, at least at the same frequency as the control logic of the robot. With robotic vehicles becoming more dynamic and ubiquitous, this places higher requirements on the vision processing pipeline. This thesis addresses the problem of estimating dense visual flow information in real time. The contributions of this work are threefold. First, it introduces a new filtering algorithm for the estimation of dense optical flow at frame rates as fast as 800 Hz for 640x480 image resolution. The algorithm follows an update-prediction architecture to estimate dense optical flow fields incrementally over time. A fundamental component of the algorithm is the modeling of the spatio-temporal evolution of the optical flow field by means of partial differential equations. Numerical predictors can implement such PDEs to propagate the current flow estimate forward in time. Experimental validation of the algorithm is provided using a high-speed ground-truth image dataset as well as real-life video data at 300 Hz. The second contribution is a new type of visual flow named structure flow. Mathematically, structure flow is the three-dimensional scene flow scaled by the inverse depth at each pixel in the image. Intuitively, it is the complete velocity field associated with image motion, including both optical flow and the scale change, or apparent divergence, of the image. Analogously to optical flow, structure flow provides a robotic vehicle with perception of the motion of the environment as seen by the camera. However, structure flow encodes the full 3D image motion of the scene, whereas optical flow only encodes the component on the image plane. An algorithm to estimate structure flow from image and depth measurements is proposed, based on the same filtering idea used to estimate optical flow. The final contribution is the spherepix data structure for processing spherical images. This data structure is the numerical back-end used for the real-time implementation of the structure flow filter. It consists of a set of overlapping patches covering the surface of the sphere. Each individual patch approximately holds properties such as orthogonality and equidistance of points, thus allowing efficient implementations of low-level classical 2D convolution-based image processing routines such as Gaussian filters and numerical derivatives. These algorithms are implemented on GPU hardware and can be integrated into future robotic embedded vision systems to provide fast visual information to robotic vehicles.
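    Stated compactly, the structure flow defined above is the scene flow scaled by inverse depth, with optical flow as its image-plane part. The notation below is a sketch of the abstract's definitions; the projection operator and the advection form of the PDE predictor are assumptions, not necessarily the exact model used in the thesis.

```latex
% Structure flow w at pixel p: 3D scene flow V scaled by inverse depth Z
% (the abstract's definition, in symbols).
\[ w(p) = \frac{1}{Z(p)}\, V(p) \]
% Optical flow u keeps the image-plane (tangential) component of w; the
% remaining radial component is the scale change / apparent divergence.
\[ u(p) = \Pi\big(w(p)\big) \]
% A plausible PDE predictor of the kind the abstract mentions: the flow
% field advected along itself between updates (an assumption).
\[ \frac{\partial w}{\partial t} + (u \cdot \nabla)\, w = 0 \]
```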

    SYSTEM-ON-A-CHIP (SOC)-BASED HARDWARE ACCELERATION FOR HUMAN ACTION RECOGNITION WITH CORE COMPONENTS

    Get PDF
    Today, the implementation of machine vision algorithms on embedded platforms or in portable systems is growing rapidly due to the demand for machine vision in daily human life. Among the applications of machine vision, human action and activity recognition has become an active research area, and market demand for integrated smart security systems is growing rapidly. Among the available approaches, embedded vision is in the top tier; however, current embedded platforms may not be able to fully exploit the potential performance of machine vision algorithms, especially in terms of low power consumption. Complex algorithms can impose immense computation and communication demands, especially action recognition algorithms, which require various stages of preprocessing, processing and machine learning blocks that need to operate concurrently. The market demands embedded platforms that operate with a power consumption of only a few watts. Attempts have been made to improve the performance of traditional embedded approaches by adding more powerful processors; this solution may solve the computation problem but increases the power consumption. System-on-a-chip field-programmable gate arrays (SoC-FPGAs) have emerged as a major architectural approach for improving power efficiency while increasing computational performance. In a SoC-FPGA, an embedded processor and an FPGA serving as an accelerator are fabricated in the same die to simultaneously improve power consumption and performance. Still, current SoC-FPGA-based vision implementations either shy away from supporting complex and adaptive vision algorithms or operate at very limited resolutions due to the immense communication and computation demands. The aim of this research is to develop a SoC-based hardware acceleration workflow for the realization of advanced vision algorithms. Hardware acceleration can improve performance for highly complex mathematical calculations or repeated functions. The performance of a SoC system can thus be improved by using hardware acceleration for the element that incurs the highest performance overhead. The outcome of this research could be used for the implementation of various vision algorithms, such as face recognition, object detection or object tracking, on embedded platforms. The contributions of SoC-based hardware acceleration for hardware-software codesign platforms include the following: (1) development of frameworks for complex human action recognition in both 2D and 3D; (2) realization of a framework with four main implemented IPs, namely, foreground and background subtraction (foreground probability), human detection, 2D/3D point-of-interest detection and feature extraction, and OS-ELM as a machine learning algorithm for action identification; (3) use of an FPGA-based hardware acceleration method to resolve system bottlenecks and improve system performance; and (4) measurement and analysis of system specifications, such as the acceleration factor, power consumption, and resource utilization. Experimental results show that the proposed SoC-based hardware acceleration approach provides better performance in terms of acceleration factor, resource utilization and power consumption than all recent comparable works. In addition, a comparison of the accuracy of the framework running on the proposed embedded platform (SoC-FPGA) with the accuracy of other PC-based frameworks shows that the proposed approach outperforms most of them.
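    The abstract names OS-ELM (Online Sequential Extreme Learning Machine) as the learning block for action identification. For orientation, the standard OS-ELM recursion is sketched below: an initial batch solve followed by cheap recursive least-squares updates per streamed chunk. This is the generic textbook algorithm, not the thesis IP core; the layer sizes, sigmoid activation, and use of Eigen are assumptions.

```cpp
// os_elm.cpp -- generic OS-ELM sketch (build: g++ -I/path/to/eigen os_elm.cpp)
#include <Eigen/Dense>
#include <iostream>
using Eigen::MatrixXd;

struct OsElm {
    MatrixXd W;              // fixed random input weights (d x L)
    Eigen::RowVectorXd b;    // fixed random biases (1 x L)
    MatrixXd beta;           // output weights (L x m), updated online
    MatrixXd P;              // L x L inverse-correlation matrix

    // Hidden-layer activations sigmoid(X*W + b) for a chunk of rows X.
    MatrixXd hidden(const MatrixXd& X) const {
        MatrixXd A = (X * W).rowwise() + b;
        return (1.0 + (-A.array()).exp()).inverse().matrix();
    }
    // Batch initialization: needs at least L samples in X0.
    void init(const MatrixXd& X0, const MatrixXd& T0) {
        MatrixXd H = hidden(X0);
        P = (H.transpose() * H).inverse();
        beta = P * H.transpose() * T0;
    }
    // Sequential update with one new chunk (X, T): matrix products plus
    // one small inverse, which is what makes OS-ELM cheap online.
    void update(const MatrixXd& X, const MatrixXd& T) {
        MatrixXd H = hidden(X);
        MatrixXd S = MatrixXd::Identity(H.rows(), H.rows()) + H * P * H.transpose();
        P -= P * H.transpose() * S.inverse() * H * P;
        beta += P * H.transpose() * (T - H * beta);
    }
    MatrixXd predict(const MatrixXd& X) const { return hidden(X) * beta; }
};

int main() {
    const int d = 32, L = 64, m = 5;   // feature dim, hidden units, classes
    OsElm net{MatrixXd::Random(d, L), Eigen::RowVectorXd::Random(L), {}, {}};
    net.init(MatrixXd::Random(128, d), MatrixXd::Random(128, m));
    net.update(MatrixXd::Random(16, d), MatrixXd::Random(16, m)); // streamed chunk
    std::cout << net.predict(MatrixXd::Random(1, d)) << "\n";     // class scores
}
```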

    Towards a Common Software/Hardware Methodology for Future Advanced Driver Assistance Systems

    Get PDF
    The European research project DESERVE (DEvelopment platform for Safe and Efficient dRiVE, 2012-2015) had the aim of designing and developing a platform tool to cope with the continuously increasing complexity and the simultaneous need to reduce cost for future embedded Advanced Driver Assistance Systems (ADAS). For this purpose, the DESERVE platform profits from cross-domain software reuse, standardization of automotive software component interfaces, and easy but safety-compliant integration of heterogeneous modules. This enables the development of a new generation of ADAS applications that combine different functions, sensors, actuators, hardware platforms, and Human Machine Interfaces (HMI) in challenging ways. This book presents the results of the DESERVE project concerning the ADAS development platform, test case functions, and the validation and evaluation of the different approaches. The reader is invited to substantiate the content of this book with the deliverables published during the DESERVE project. Technical topics discussed in this book include: modern ADAS development platforms; design space exploration; driving modelling; video-based and radar-based ADAS functions; HMI for ADAS; and vehicle-hardware-in-the-loop validation systems.

    Recognition of objects to grasp and Neuro-Prosthesis control

    Get PDF