4,595 research outputs found

    Traffic Scene Perception for Automated Driving with Top-View Grid Maps

    Get PDF
    An automated vehicle must make safe, sensible, and fast decisions based on its environment. This requires an accurate and computationally efficient model of the traffic environment. This environment model is meant to fuse and filter measurements from different sensors and to provide them to subsequent subsystems as compact yet expressive information. This work addresses modelling the traffic scene on the basis of top-view grid maps. Compared with other environment models, they allow an early fusion of range measurements from different sources at low computational cost, as well as an explicit modelling of free space. After presenting a method for ground-surface estimation, which forms the basis of the top-view modelling, methods for occupancy and elevation mapping on grid maps are treated, based on multiple, noisy, partially contradictory or missing range measurements. On the resulting sensor-independent representation, models for detecting traffic participants and for estimating scene flow, odometry, and tracking features are then investigated. Evaluations on publicly available datasets and on a real vehicle show that top-view grid maps can be estimated from on-board LiDAR sensors and that safety-critical environment information such as observability and drivability can be derived from them reliably. Finally, traffic participants are determined as oriented bounding boxes with semantic classes, velocities, and tracking features from a joint model for object detection and flow estimation based on the top-view grid maps.
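    To make the occupancy-mapping step concrete, below is a minimal sketch of the standard log-odds occupancy update on a top-view grid, assuming a LiDAR position and ray end points already projected into the ground plane; the cell size, log-odds increments, and the `to_cell` projection are illustrative choices, not parameters taken from the thesis.

```python
import numpy as np

# Illustrative parameters; not taken from the thesis.
CELL_SIZE = 0.2              # metres per grid cell
L_HIT, L_MISS = 0.85, -0.4   # log-odds increments for occupied / free evidence
L_MIN, L_MAX = -4.0, 4.0     # clamping keeps the filter able to change its mind

def to_cell(xy, origin, size=CELL_SIZE):
    """Project a metric (x, y) point to integer grid indices."""
    return tuple(((xy - origin) / size).astype(int))

def update_grid(log_odds, sensor_xy, hits_xy, origin):
    """Log-odds occupancy update: cells along each ray accumulate free-space
    evidence, the cell containing the measured return accumulates occupied
    evidence. `log_odds` is a 2-D numpy array covering the mapped area."""
    for hit in hits_xy:
        # Walk along the ray with a coarse linear sampling (simple ray cast).
        for t in np.linspace(0.0, 1.0, num=50, endpoint=False):
            c = to_cell(sensor_xy + t * (hit - sensor_xy), origin)
            log_odds[c] = np.clip(log_odds[c] + L_MISS, L_MIN, L_MAX)
        c = to_cell(hit, origin)
        log_odds[c] = np.clip(log_odds[c] + L_HIT, L_MIN, L_MAX)
    return log_odds

# Occupancy probability per cell: p = 1 / (1 + exp(-log_odds))
```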

    Robust Grasp with Compliant Multi-Fingered Hand

    Get PDF
    As robots find more and more applications in unstructured environments, the need for grippers able to grasp and manipulate a large variety of objects has brought consistent attention to the use of multi-fingered hands. The hardware development and the control of these devices have become one of the most active research subjects in the field of grasping and dexterous manipulation. Despite a large number of publications on grasp planning, grasping frameworks that strongly depend on information collected by touching the object have only recently begun to receive attention. This thesis focuses on the development of a controller for a robotic system composed of a 7-DoF collaborative arm and a 16-DoF torque-controlled multi-fingered hand that can successfully and robustly grasp various objects. The robustness of the grasp is increased through active interaction between the object and the arm/hand robotic system. Algorithms that rely on the kinematic model of the arm/hand system and its compliance characteristics are proposed and tested in real grasping applications. The obtained results underline the importance of exploiting information from hand-object contacts, which is necessary to achieve human-like abilities in grasping tasks.
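    As a rough illustration of the compliance idea (not the controller developed in the thesis), a generic Cartesian impedance law lets the arm/hand system yield to contact while still tracking a desired pose; the gains below are placeholders.

```python
import numpy as np

# Placeholder stiffness and damping gains (not the thesis' values).
K = np.diag([300.0, 300.0, 300.0])   # N/m
D = np.diag([30.0, 30.0, 30.0])      # N*s/m

def compliance_wrench(x_des, x, xd_des, xd):
    """Generic Cartesian impedance law: the commanded force pulls the
    end-effector toward the desired pose while damping its velocity,
    so contact with the object is absorbed rather than fought."""
    return K @ (x_des - x) + D @ (xd_des - xd)

# Joint torques follow via the manipulator Jacobian transpose:
#   tau = J.T @ compliance_wrench(x_des, x, xd_des, xd)
```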

    Architectures for embedded multimodal sensor data fusion systems in the robotics and airport traffic surveillance domains

    Get PDF
    Smaller autonomous robots and embedded sensor data fusion systems often suffer from limited computational and hardware resources. Many 'real-time' algorithms for multimodal sensor data fusion cannot be executed on such systems, at least not in real time and sometimes not at all, because of the computational and energy resources they need, a consequence of the architecture of the computational hardware used in these systems. Alternative hardware architectures for generic tracking algorithms could provide a solution to overcome some of these limitations. For tracking and self-localization, sequential Bayesian filters, in particular particle filters, have been shown to handle a range of tracking problems that could not be solved with other algorithms. However, particle filters have some serious disadvantages when executed on the serial computational architectures used in most systems. The potential increase in performance for particle filters is huge, as many of the computational steps can be done concurrently. A generic hardware solution for particle filters can relieve the central processing unit from the computational load associated with the tracking task. The general topic of this research is hardware-software architectures for multimodal sensor data fusion in embedded systems, in particular for tracking, with the goal of developing a high-performance computational architecture for embedded applications in the robotics and airport traffic surveillance domains. The primary concern of the research is therefore the integration of domain-specific concept support into hardware architectures for low-level multimodal sensor data fusion, in particular embedded systems for tracking with Bayesian filters, and a distributed hardware-software tracking system for airport traffic surveillance and control. Runway incursions are occurrences at an aerodrome involving the incorrect presence of an aircraft, vehicle, or person on the protected area of a surface designated for the landing and take-off of aircraft. The growing traffic volume has kept runway incursions on the NTSB's 'Most Wanted' list of safety improvements for over a decade, and recent incidents show that the problem still exists. The technological responses that have been deployed in significant numbers are ASDE-X and A-SMGCS. Although these are a significant improvement and reduce the frequency of runway incursions, some runway incursion scenarios are not optimally covered by these systems, detection of runway incursion events is not as fast as desired, and they are too expensive for all but the biggest airports. Local, short-range sensors could provide the surveillance accuracy needed for affordable runway incursion prevention. In this context, the following objectives shall be reached: 1) show the feasibility of runway incursion prevention systems based on localized surveillance; 2) develop a design for a local runway incursion alerting system; 3) realize a prototype of the system design using the developed tracking hardware.
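    For reference, one bootstrap particle-filter iteration of the kind discussed above looks like the sketch below; `motion_model` and `likelihood` are application-specific placeholders. The per-particle predict and weighting loops are independent, which is exactly the concurrency a dedicated hardware architecture can exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         motion_model, likelihood):
    """One bootstrap particle-filter iteration over an array of particles."""
    # Predict: propagate each particle through the (noisy) motion model.
    particles = np.array([motion_model(p, control, rng) for p in particles])
    # Update: weight each particle by the measurement likelihood.
    weights = weights * np.array([likelihood(measurement, p) for p in particles])
    weights /= weights.sum()
    # Resample (systematic) when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        positions = (rng.random() + np.arange(len(particles))) / len(particles)
        idx = np.searchsorted(np.cumsum(weights), positions)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```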

    Multi-Object Tracking System based on LiDAR and RADAR for Intelligent Vehicles applications

    Get PDF
    This Final Degree Project aims to develop a 3D multi-object detection and tracking system based on the sensor fusion of LiDAR and RADAR for autonomous driving applications, built on traditional machine learning algorithms. The implementation is based on Python and ROS and complies with real-time requirements. In the object detection stage, the RANSAC plane segmentation algorithm is used, followed by the extraction of bounding boxes with DBSCAN. A late sensor fusion based on 3D Intersection over Union and a BEV-SORT tracking system complete the proposed architecture. (Degree in Industrial Electronics and Automation Engineering)
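    A minimal sketch of the detection front end described above — RANSAC ground-plane segmentation followed by DBSCAN clustering — is shown below; the thresholds are illustrative, and the late 3D-IoU fusion and BEV-SORT tracking stages are not reproduced here.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

def ransac_ground(points, n_iter=100, dist_thresh=0.2):
    """Fit a ground plane to an (N, 3) LiDAR point cloud with a minimal
    RANSAC loop and return a boolean mask of ground points."""
    best_mask = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-6:
            continue
        normal /= norm
        mask = np.abs((points - p0) @ normal) < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask = mask
    return best_mask

def cluster_obstacles(points, eps=0.7, min_samples=10):
    """Cluster the non-ground points into object candidates with DBSCAN;
    axis-aligned boxes stand in for the oriented bounding-box extraction."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[:, :2])
    boxes = []
    for lbl in set(labels) - {-1}:
        cluster = points[labels == lbl]
        boxes.append(np.hstack([cluster.min(axis=0), cluster.max(axis=0)]))
    return boxes
```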

    Multi-Objective Constraint Satisfaction for Mobile Robot Area Defense

    Get PDF
    In developing multi-robot cooperative systems, there are often competing objectives that need to be met. For example, in automating area defense systems, multiple robots must work together to explore the entire area and maintain consistent communications to alert the other agents and ensure trust in the system. This research presents an algorithm that tasks robots with meeting the two specific goals of exploration and communication maintenance in an uncoordinated environment, reducing the need for a user to pre-balance the objectives. This multi-objective problem is defined as a constraint satisfaction problem solved using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). Both goals, exploration and communication maintenance, are described as fitness functions in the algorithm that satisfy their corresponding constraints. The exploration fitness was described in three ways to diversify how exploration is measured, whereas the communication maintenance fitness was calculated as the number of independent clusters of agents. Applying the algorithm to the area defense problem, the results show that exploration and communication without coordination are two diametrically opposed goals, in which one may be favored, but only at the expense of the other. This work also presents suggestions for anyone looking to take further steps in developing a physically grounded solution to this area defense problem.
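    To illustrate how the two objectives could be expressed as fitness functions (the thesis' exact formulations differ — exploration alone is measured in three ways there), a hypothetical sketch is given below; `sensor_range` and `comm_range` are assumed parameters, and these functions would then be handed to an NSGA-II implementation.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import connected_components

def exploration_fitness(robot_xy, area_xy, sensor_range=5.0):
    """One possible exploration measure: fraction of sampled area points
    within sensor range of at least one robot."""
    covered = (cdist(area_xy, robot_xy) < sensor_range).any(axis=1)
    return covered.mean()

def communication_fitness(robot_xy, comm_range=10.0):
    """Communication-maintenance measure as in the abstract: the number of
    independent clusters of agents (1 means the team is fully connected)."""
    adjacency = cdist(robot_xy, robot_xy) < comm_range
    n_clusters, _ = connected_components(adjacency, directed=False)
    return n_clusters
```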

    A Comprehensive Survey of Deep Learning in Remote Sensing: Theories, Tools and Challenges for the Community

    Full text link
    In recent years, deep learning (DL), a re-branding of neural networks (NNs), has risen to the top in numerous areas, namely computer vision (CV), speech recognition, natural language processing, etc. Whereas remote sensing (RS) possesses a number of unique challenges, primarily related to sensors and applications, RS inevitably draws from many of the same theories as CV, e.g., statistics, fusion, and machine learning, to name a few. This means that the RS community should be aware of, if not at the leading edge of, advancements like DL. Herein, we provide the most comprehensive survey of state-of-the-art RS DL research. We also review recent new developments in the DL field that can be used in DL for RS. Namely, we focus on theories, tools and challenges for the RS community. Specifically, we focus on unsolved challenges and opportunities as they relate to (i) inadequate data sets, (ii) human-understandable solutions for modelling physical phenomena, (iii) Big Data, (iv) non-traditional heterogeneous data sources, (v) DL architectures and learning algorithms for spectral, spatial and temporal data, (vi) transfer learning, (vii) an improved theoretical understanding of DL systems, (viii) high barriers to entry, and (ix) training and optimizing the DL. Comment: 64 pages, 411 references. To appear in Journal of Applied Remote Sensing.

    Bimanual robot skills: MP encoding, dimensionality reduction and reinforcement learning

    Get PDF
    In our culture, robots have featured in novels and cinema for a long time, but it has been especially in the last two decades that improvements in hardware - better computational power and components - and advances in Artificial Intelligence (AI) have allowed robots to start sharing spaces with humans. Such situations require, aside from ethical considerations, robots that can move with both compliance and precision and learn at different levels, such as perception, planning, and motion, the latter being the focus of this work. The first issue addressed in this thesis is inverse kinematics for redundant robot manipulators, i.e., positioning the robot joints so as to reach a certain end-effector pose. We opt for iterative solutions based on the inversion of the kinematic Jacobian of a robot, and propose to filter and limit the gains in the spectral domain, while also unifying this approach with a continuous, multi-priority scheme. This inverse kinematics method is then used to derive manipulability over the whole workspace of an anthropomorphic arm, and the coordination of two arms is subsequently optimized by finding their best relative positioning. Having solved the kinematic issues, a robot learning within a human environment needs to move compliantly, with a limited amount of force, in order not to harm humans or cause damage, while being as precise as possible. Therefore, we developed two dynamic models for the same redundant arm we had analysed kinematically: the first based on local models with Gaussian projections, and the second characterizing the most problematic term of the dynamics, namely friction. These models allowed us to implement feed-forward controllers in which we can actively change the weights in the compliance-precision tradeoff. Moreover, we used these models to predict external forces acting on the robot without the use of force sensors. Afterwards, we noted that bimanual robots must coordinate their components (or limbs) and be able to adapt to new situations with ease. Over the last decade, a number of successful applications for learning robot motion tasks have been published. However, due to the complexity of a complete system including all the required elements, most of these applications involve only simple robots with a large number of high-end sensors, or consist of very simple and controlled tasks. Using our previous framework for kinematics and control, we relied on two types of movement primitives to encapsulate robot motion. Such movement primitives are well suited to reinforcement learning. In particular, we used direct policy search, which uses the motion parametrization as the policy itself. In order to improve the learning speed in real robot applications, we generalized a policy search algorithm to give some importance to samples yielding a bad result, and we paid special attention to the dimensionality of the motion parametrization. We reduced this dimensionality with linear methods, using the rewards obtained through motion repetition and execution.
    We tested this framework in a bimanual task performed by two anthropomorphic arms, the folding of garments, showing how a reduced dimensionality can provide qualitative information about robot couplings and help to speed up the learning of tasks when robot motion executions are costly.
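    As an illustration of the Jacobian-inversion step with spectral filtering mentioned above, the sketch below applies a standard damped least-squares inverse with singular-value thresholding; the damping and cutoff values are placeholders, not the filter actually proposed in the thesis.

```python
import numpy as np

def ik_step(jacobian, pose_error, damping=0.05, sv_min=1e-2):
    """One iteration of Jacobian-based inverse kinematics. The SVD exposes
    the 'spectral' view: small singular values, which amplify noise near
    singularities, are damped and thresholded before inversion."""
    U, S, Vt = np.linalg.svd(jacobian, full_matrices=False)
    # Damped reciprocal of each singular value (illustrative filter shape).
    S_inv = S / (S ** 2 + damping ** 2)
    S_inv[S < sv_min] = 0.0
    dq = Vt.T @ np.diag(S_inv) @ U.T @ pose_error
    return dq   # joint increment; add to the current configuration
```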

    Deep probabilistic methods for improved radar sensor modelling and pose estimation

    Get PDF
    Radar’s ability to sense under adverse conditions and at far range makes it a valuable alternative to vision and lidar for mobile robotic applications. However, its complex, scene-dependent sensing process and significant noise artefacts make working with radar challenging. Moving past classical rule-based approaches, which have dominated the literature to date, this thesis investigates deep and data-driven solutions across a range of tasks in robotics. Firstly, a deep approach is developed for mapping raw sensor measurements to a grid map of occupancy probabilities, outperforming classical filtering approaches by a significant margin. A distribution over the occupancy state is captured, additionally allowing uncertainty in predictions to be identified and managed. The approach is trained entirely using partial labels generated automatically from lidar, without requiring manual labelling. Next, a deep model is proposed for generating stochastic radar measurements from simulated elevation maps. The model is trained by learning the forward and backward processes side by side, using a combination of adversarial and cyclical consistency constraints together with a partial alignment loss, with labels generated from lidar. By faithfully replicating the radar sensing process, new models can be trained for downstream tasks using labels that are readily available in simulation. In this case, segmentation models trained on simulated radar measurements, when deployed in the real world, are shown to approach the performance of a model trained entirely on real-world measurements. Finally, the potential of deep approaches applied to the radar odometry task is explored. A learnt feature space is combined with a classical correlative scan matching procedure and optimised for pose prediction, allowing the proposed method to outperform the previous state of the art by a significant margin. Through a probabilistic formulation, the uncertainty in the pose is also successfully characterised. Building upon this success, properties of the Fourier Transform are then utilised to separate the search for translation and angle. It is shown that this decoupled search results in a significant boost to run-time performance, allowing the approach to run in real time on CPUs and embedded devices, whilst remaining competitive with other radar odometry methods proposed in the literature.
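    The Fourier property referred to in the last paragraph can be sketched as follows (the learnt feature space and the correlative scan-matching machinery of the thesis are not reproduced): phase correlation recovers the translation between two top-view scans, while the translation-invariant magnitude spectrum allows the rotation to be searched separately.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the 2-D translation between two same-sized top-view scans:
    a shift becomes a phase ramp in the frequency domain, so the peak of the
    inverse-transformed cross-power spectrum gives the offset."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-9
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap indices to signed shifts.
    dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
    dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
    return dy, dx

# Rotation can be handled separately because the magnitude spectrum |FFT|
# is invariant to translation: resample it on a polar grid and correlate
# along the angular axis before solving for translation as above.
```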