
    Visual Analysis Algorithms for Embedded Systems

    Visual search systems are very popular applications, but on-line versions in 3G wireless environments suffer from network constraints such as unstable or limited bandwidth, which delay query delivery and significantly degrade the user experience. An alternative is to exploit the ability of the newest mobile devices to perform heterogeneous activities, not only capturing but also processing images. Visual feature extraction and compression can be performed on on-board Graphical Processing Units (GPUs), making smartphones capable of detecting a generic object (matching) in an exact way or of performing a classification activity. The latest trends in visual search have resulted in dedicated efforts in MPEG standardization, namely the MPEG CDVS (Compact Descriptor for Visual Search) standard, an ISO/IEC standard for extracting a compressed descriptor. Regarding classification, neural networks have acquired impressive importance in recent years and have been applied to several domains; this thesis focuses on the use of deep neural networks to classify images by means of deep learning. Implementing visual search algorithms and deep-learning-based classification in embedded environments is not a mere code-porting activity. Recent embedded devices, such as development boards with general-purpose GPUs (GPGPUs), are powerful but offer a limited amount of resources. GPU architectures fit particularly well because they execute many operations in parallel, following the SIMD (Single Instruction Multiple Data) paradigm. Nonetheless, good design choices are necessary to make the best use of the available hardware and memory. For visual search, following the MPEG CDVS standard, the contribution of this thesis is an efficient feature-computation phase and a parallel CDVS detector, completely implemented on embedded devices supporting the OpenCL framework. Algorithmic choices and implementation details that target the intrinsic characteristics of the selected embedded platforms are presented and discussed. Experimental results on several GPUs show that the GPU-based solution is up to 7× faster than the CPU-based one. This speed-up opens new visual search scenarios in which the entire computation runs on board in real time, with no data transfer. Regarding deep convolutional neural networks for off-line image classification, their computational and memory requirements are huge, which is an issue on embedded devices. Most of the complexity derives from the convolutional layers, and in particular from the matrix multiplications they entail. The contribution of this thesis is a self-contained implementation of image classification providing the common layers used in neural networks. The approach relies on a heterogeneous CPU-GPU scheme that performs convolutions in the transform domain. Experimental results show that this heterogeneous scheme achieves a 50× speedup over the CPU-only reference, outperforms a GPU-based reference by 2×, and cuts power consumption by nearly 30%.
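    The transform-domain convolution mentioned above rests on the convolution theorem: a convolution in the spatial domain becomes a pointwise product in the frequency domain, so the nested loops of a direct convolution are replaced by two FFTs, one multiplication, and an inverse FFT. A minimal NumPy sketch of the idea (illustrative only; the thesis realizes this as a heterogeneous CPU-GPU scheme, and the function names here are assumptions):

        import numpy as np

        def conv2d_direct(image, kernel):
            """Reference 2-D linear convolution, O(N^2 * K^2)."""
            ih, iw = image.shape
            kh, kw = kernel.shape
            out = np.zeros((ih + kh - 1, iw + kw - 1))
            for y in range(kh):
                for x in range(kw):
                    out[y:y + ih, x:x + iw] += kernel[y, x] * image
            return out

        def conv2d_fft(image, kernel):
            """The same convolution in the transform domain, O(N^2 log N):
            zero-pad both operands to the full output size, multiply their
            spectra pointwise, and transform back."""
            fh = image.shape[0] + kernel.shape[0] - 1
            fw = image.shape[1] + kernel.shape[1] - 1
            spectrum = np.fft.rfft2(image, s=(fh, fw)) * np.fft.rfft2(kernel, s=(fh, fw))
            return np.fft.irfft2(spectrum, s=(fh, fw))

        rng = np.random.default_rng(0)
        img, ker = rng.standard_normal((64, 64)), rng.standard_normal((5, 5))
        assert np.allclose(conv2d_fft(img, ker), conv2d_direct(img, ker))

    For a K×K kernel on an N×N image the direct method costs O(N²K²) while the FFT route costs O(N² log N), which is the source of the savings for large convolutional layers.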

    Accelerated Object Tracking with Local Binary Features

    Multi-object tracking is a problem with wide application in modern computing. Object tracking is leveraged in areas such as human-computer interaction, autonomous vehicle navigation, and panorama generation, as well as countless other robotic applications. Several trackers have demonstrated favorable results for tracking single objects. However, modern object trackers must make significant tradeoffs to accommodate multiple objects while maintaining real-time performance, including sacrifices in robustness and accuracy that adversely affect the results. This thesis details the design and multiple implementations of an object tracker focused on computational efficiency, which is achieved through the use of local binary descriptors in a template-matching approach. Candidate templates are matched to a dictionary composed of both static and dynamic templates, allowing for variation in the appearance of the object while minimizing the potential for drift in the tracker. Locality constraints are used to reduce tracking jitter. Because the algorithm shows significant promise for parallelization, it was implemented on the Graphics Processing Unit (GPU) using the CUDA API. The tracker's efficiency also led to its implementation on a mobile platform as one of the mobile trackers that can accurately track at faster-than-real-time speed. Benchmarks comparing the proposed tracker to state-of-the-art trackers were performed on a wide range of standard test videos. The tracker implemented in this work demonstrates a higher degree of accuracy while operating several orders of magnitude faster.
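    Local binary descriptors are cheap to match because comparison reduces to a Hamming distance, i.e. an XOR followed by a population count, an operation that maps well onto both CPUs and GPUs. A toy sketch of the dictionary-matching step described above (the 256-bit descriptor width, pool sizes, and function names are assumptions, not the thesis's actual interface):

        import numpy as np

        # Precomputed Hamming weight of every byte value 0..255.
        POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

        def hamming(a, b):
            """Hamming distance between two binary descriptors stored as
            uint8 arrays (a 256-bit descriptor occupies 32 bytes)."""
            return int(POPCOUNT[np.bitwise_xor(a, b)].sum())

        def match_to_dictionary(candidate, static_templates, dynamic_templates):
            """Match a candidate against a dictionary of static templates
            (fixed at initialization, limiting drift) and dynamic templates
            (updated online, absorbing appearance change). Returns
            (index, distance, is_static) of the best match."""
            best = (None, float("inf"), True)
            for pool, is_static in ((static_templates, True), (dynamic_templates, False)):
                for i, tpl in enumerate(pool):
                    d = hamming(candidate, tpl)
                    if d < best[1]:
                        best = (i, d, is_static)
            return best

        # Toy usage with random 32-byte descriptors.
        rng = np.random.default_rng(1)
        static_pool = [rng.integers(0, 256, 32, dtype=np.uint8) for _ in range(4)]
        dynamic_pool = [rng.integers(0, 256, 32, dtype=np.uint8) for _ in range(4)]
        query = static_pool[2].copy()
        query[0] ^= 0b00000111              # corrupt three bits
        print(match_to_dictionary(query, static_pool, dynamic_pool))  # (2, 3, True)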

    Vision Pipelines and Optimizations

    This chapter explores some hypothetical computer vision pipeline designs to understand HW/SW design alternatives and optimizations. Instead of looking at isolated computer vision algorithms, this chapter ties together many concepts into complete vision pipelines. Vision pipelines are sketched out for a few example applications to illustrate the use of different methods. Example applications include object recognition using shape and color for automobiles, face detection and emotion detection using local features, image classification using global features, and augmented reality. The examples have been chosen to illustrate the use of different families of feature description metrics within the Vision Metrics Taxonomy presented in Chap. 5. Alternative optimizations at each stage of the vision pipeline are explored. For example, we consider which vision algorithms run better on a CPU versus a GPU, and discuss how data transfer time between compute units and memory affects performance.
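    To first order, the CPU-versus-GPU placement question for a pipeline stage reduces to whether the kernel's speedup outweighs the cost of moving its data: offloading pays only if transfer time plus GPU compute time beats the CPU time for the same stage. A back-of-the-envelope helper (all numbers and names illustrative, not from the chapter):

        def offload_pays_off(bytes_moved, bandwidth_bytes_per_s, cpu_ms, gpu_ms):
            """First-order model: offloading a stage wins only when the
            host-to-device transfer time plus the GPU compute time is
            below the CPU compute time for the same stage."""
            transfer_ms = bytes_moved / bandwidth_bytes_per_s * 1e3
            return transfer_ms + gpu_ms < cpu_ms

        # A 1080p RGB frame (~6.2 MB) over a 10 GB/s link takes ~0.62 ms.
        print(offload_pays_off(1920 * 1080 * 3, 10e9, cpu_ms=5.0, gpu_ms=1.0))  # True
        print(offload_pays_off(1920 * 1080 * 3, 10e9, cpu_ms=1.2, gpu_ms=1.0))  # False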

    Real-Time GPS-Alternative Navigation Using Commodity Hardware

    Modern navigation systems can use the Global Positioning System (GPS) to determine position accurately, in some cases with precision bordering on millimeters. Unfortunately, GPS technology is susceptible to jamming and interception, and it is unavailable indoors or underground. Several navigation techniques can be used during periods of GPS unavailability, but very few achieve GPS-level precision. One method of achieving high-precision navigation without GPS is to fuse data obtained from multiple sensors. This thesis explores the fusion of imaging and inertial sensors and implements it in a real-time system that mimics human navigation. In addition, programmable graphics processing unit technology is leveraged to perform stream-based image processing on a computer's video card, so the system can perform complex mathematical computations in a fraction of the time the same operations would take on a CPU-based platform. The result is an adaptable, portable, inexpensive, and self-contained software and hardware platform, which paves the way for advances in autonomous navigation, mobile cartography, and artificial intelligence.
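    A highly simplified 1-D complementary filter illustrates the division of labor in image-inertial fusion: the inertial sensor provides high-rate but drifting dead-reckoning, while image-based fixes provide low-rate, drift-free corrections. The gain, rates, and names below are illustrative assumptions, not the thesis's parameters:

        def fuse_step(position, velocity, accel, dt, vision_fix=None, gain=0.05):
            """One step of a toy 1-D complementary filter: dead-reckon with
            the (biased, high-rate) inertial measurement, then nudge the
            estimate toward the (drift-free, low-rate) image-based fix."""
            velocity += accel * dt       # integrate acceleration into velocity
            position += velocity * dt    # integrate velocity into position
            if vision_fix is not None:
                position += gain * (vision_fix - position)
            return position, velocity

        # Stationary platform, accelerometer bias of 0.02 m/s^2, one vision
        # fix (true position 0) every 10th inertial sample at 100 Hz.
        pos, vel = 0.0, 0.0
        for k in range(1, 101):
            fix = 0.0 if k % 10 == 0 else None
            pos, vel = fuse_step(pos, vel, accel=0.02, dt=0.01, vision_fix=fix)
        print(f"position error after 1 s: {pos:.4f} m")

    Without the vision fixes the position error would grow quadratically with the accelerometer bias; a full system would also correct the velocity state, for example with a Kalman filter.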

    Robot Navigation in Human Environments

    For the near future, we envision service robots that will help us with everyday chores in home, office, and urban environments. These robots need to work in environments that were designed for humans, and they have to collaborate with humans to fulfill their tasks. In this thesis, we propose new methods for communication, knowledge transfer, and collaboration between humans and robots in four different navigation tasks. In the first application, we investigate how automated services for giving wayfinding directions can be improved to better address the needs of the human recipients. We propose a novel method based on inverse reinforcement learning that learns from a corpus of human-written route descriptions what amount and type of information a route description should contain. By imitating the human teachers' description style, our algorithm produces new route descriptions that sound similarly natural and convey similar information content, as we show in a user study. In the second application, we investigate how robots can leverage background information provided by humans to explore an unknown environment more efficiently. We propose an algorithm that exploits user-provided information such as sketches or floor plans by combining a global exploration strategy, based on the solution of a traveling salesman problem, with a local nearest-frontier-first exploration scheme; a sketch of this two-level strategy appears at the end of this entry. Our experiments show that the exploration tours are significantly shorter and that our system allows the user to effectively select the areas that the robot should explore. In the second part of this thesis, we focus on humanoid robots in home and office environments. Their human-like body plan allows humanoid robots to navigate in environments and operate tools that were designed for humans, making them suitable for a wide range of applications. As localization and mapping are prerequisites for all navigation tasks, we first introduce a novel feature descriptor for RGB-D sensor data and integrate this building block into an appearance-based simultaneous localization and mapping system that we adapt and optimize for use on humanoid robots. Our optimized system tracks a real Nao humanoid robot more accurately and more robustly than existing approaches. As the third application, we investigate how humanoid robots can efficiently cover known environments with their camera, for example for inspection or search tasks. We extend an existing next-best-view approach by integrating inverse reachability maps, allowing us to efficiently sample and check collision-free full-body poses. Our approach enables the robot to inspect as much of the environment as possible. In the fourth application, we extend the coverage scenario to environments that also include articulated objects that the robot has to actively manipulate to uncover obstructed regions. We introduce algorithms for navigation subtasks that run highly parallelized on the graphics processing units of embedded devices. Together with a novel heuristic for estimating utility maps, our system finds high-utility camera poses for efficiently covering environments with articulated objects. All techniques presented in this thesis were implemented in software and thoroughly evaluated in user studies, simulations, and experiments in both artificial and real-world environments.
Our approaches advance the state of the art towards universally usable robots in everyday environments.
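    The two-level exploration strategy from the second application can be sketched as a global tour over the user-selected areas combined with a local nearest-frontier-first rule inside each area. Here a greedy nearest-neighbour heuristic stands in for an actual traveling-salesman solver, and the map and names are illustrative assumptions:

        import math

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def global_tour(start, areas):
            """Order the user-selected areas greedily by distance
            (a nearest-neighbour stand-in for a TSP solver)."""
            tour, here, todo = [], start, list(areas)
            while todo:
                nxt = min(todo, key=lambda a: dist(here, a))
                todo.remove(nxt)
                tour.append(nxt)
                here = nxt
            return tour

        def explore(start, areas, frontiers_of):
            """Follow the global tour; inside each area, repeatedly move
            to the nearest remaining frontier (local exploration rule)."""
            visited, here = [], start
            for area in global_tour(start, areas):
                frontiers = list(frontiers_of(area))
                while frontiers:
                    f = min(frontiers, key=lambda p: dist(here, p))
                    frontiers.remove(f)
                    visited.append(f)
                    here = f
            return visited

        # Toy map: two user-selected areas, each with a few frontier cells.
        areas = [(10.0, 0.0), (0.0, 10.0)]
        frontiers = {(10.0, 0.0): [(9.0, 1.0), (11.0, -1.0)],
                     (0.0, 10.0): [(1.0, 9.0)]}
        print(explore((0.0, 0.0), areas, frontiers.get))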