    Classification and Change Detection in Mobile Mapping LiDAR Point Clouds

    Creating 3D models of the static environment is an important task for the advancement of driver assistance systems and autonomous driving. In this work, a static reference map is created from a Mobile Mapping "light detection and ranging" (LiDAR) dataset. The data were acquired in 14 measurement runs between March and October 2017 in Hannover and comprise about 15 billion points in total. The point cloud data are first segmented by region growing and then processed by a random forest classifier, which divides the segments into five static classes ("facade", "pole", "fence", "traffic sign", and "vegetation") and three dynamic classes ("vehicle", "bicycle", "person") with an overall accuracy of 94%. All static objects are entered into a voxel grid so that different measurement epochs can be compared directly. In the next step, the classified voxels are combined with the result of a visibility analysis: a ray tracing algorithm detects traversed voxels and differentiates between empty space and occlusion. Each voxel is then classified as suitable for the static reference map or not, based on its object class and its occupation state across the epochs. This avoids eliminating static voxels that were merely occluded in some of the measurement runs (e.g. parts of a building occluded by a tree). Segments that are only temporarily present and connected to static objects, such as scaffolds or awnings on buildings, are however not included in the reference map. Overall, combining the classification with the subsequent entry of the classes into a voxel grid yields good, useful results that can be updated by incorporating new measurement data.
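The per-voxel decision described above (combine the object class with the occupancy state observed across epochs, treating occlusion differently from confirmed empty space) can be sketched as follows. This is a minimal illustration; the class names follow the abstract, but the decision rule, state encoding, and function name are assumptions, not the authors' exact implementation.

```python
# Illustrative sketch of the voxel-level map-suitability decision.
# The static class set follows the abstract; the rule itself is assumed.
STATIC_CLASSES = {"facade", "pole", "fence", "traffic sign", "vegetation"}

def voxel_suitable_for_map(obj_class, occupancy_per_epoch):
    """Decide whether a voxel belongs in the static reference map.

    occupancy_per_epoch: one state per measurement run, each either
    "occupied", "empty", or "occluded" (the last two as distinguished
    by the ray-tracing visibility analysis).
    """
    if obj_class not in STATIC_CLASSES:
        return False  # dynamic objects never enter the reference map
    visible = [s for s in occupancy_per_epoch if s != "occluded"]
    if not visible:
        return False  # never directly observed, cannot be confirmed
    # "empty" in a visible epoch means the object was absent there,
    # e.g. a temporary scaffold, so the voxel is not permanently static.
    return all(s == "occupied" for s in visible)
```

The key point of the sketch is that occluded epochs are excluded from the vote, so a facade voxel hidden behind a tree in some runs is still kept.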

    Incremental learning of EMG-based Control commands using Gaussian Processes

    Myoelectric control is the process of controlling a prosthesis or an assistive robot using the electrical signals of the muscles. Pattern recognition in myoelectric control is a challenging field, since the underlying distribution of the signal is likely to change during the application. Covariate shifts, including changes of the arm position or different levels of muscular activation, often lead to significant instability of the control signal. This work addresses these challenges by enhancing a myoelectric human-machine interface with the sparse Gaussian Process (sGP) approximation Variational Free Energy and by introducing a novel adaptive model based on an unsupervised incremental learning approach. The adaptive model integrates an interclass and an intraclass distance to improve prediction stability under challenging conditions. Furthermore, the incremental updates are shown to lead to significantly increased performance and higher stability of the predictions in an online user study.
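The unsupervised update rule above, which accepts a new sample only when the intraclass distance is clearly smaller than the interclass distance, can be illustrated with a simple prototype model. This is a deliberately simplified stand-in for the paper's sGP formulation: the nearest-prototype predictor, the `margin` parameter, and the class/method names are all assumptions made for the sketch.

```python
import numpy as np

class IncrementalClassModel:
    """Toy stand-in for the adaptive model: one prototype per class,
    updated incrementally from unlabeled samples (assumed structure)."""

    def __init__(self, init_means):
        self.means = np.asarray(init_means, dtype=float)
        self.counts = np.ones(len(self.means), dtype=int)

    def predict(self, x):
        # Nearest-prototype prediction (stands in for the sGP posterior).
        return int(np.argmin(np.linalg.norm(self.means - x, axis=1)))

    def update(self, x, margin=2.0):
        """Incorporate x only when the prediction looks stable: the
        intraclass distance must beat the nearest interclass distance
        by the given margin; otherwise skip the update."""
        dists = np.linalg.norm(self.means - x, axis=1)
        c = int(np.argmin(dists))
        others = np.delete(dists, c)
        if others.size and dists[c] * margin > others.min():
            return None  # ambiguous sample, no update
        self.counts[c] += 1
        self.means[c] += (x - self.means[c]) / self.counts[c]
        return c
```

Samples near a decision boundary are rejected rather than absorbed, which is one way such a distance criterion can guard the model against covariate-shift-induced drift.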

    Semantic Categorization of Outdoor Scenes with Uncertainty Estimates using Multi-Class Gaussian Process Classification

    This paper presents a novel semantic categorization method for 3D point cloud data using supervised, multi-class Gaussian Process (GP) classification. In contrast to other approaches, particularly Support Vector Machines, which are probably the most widely used method for this task to date, GPs have the major advantage of providing informative uncertainty estimates about the resulting class labels. As we show in experiments, these uncertainty estimates can either be used to improve the classification by neglecting uncertain class labels or, more importantly, they can serve as an indication of the under-representation of certain classes in the training data. This means that GP classifiers are much better suited to a lifelong learning framework, where not all classes are represented initially and new training data arrives during the operation of the robot. © 2012 IEEE.
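The rejection idea in this abstract, discarding class labels whose predictive distribution is too uncertain, is commonly implemented by thresholding the entropy of the per-point class probabilities. The sketch below assumes such an entropy rule; the threshold value and the function name are illustrative, not taken from the paper.

```python
import numpy as np

def predict_with_reject(class_probs, max_entropy=0.5):
    """Return the most likely class label, or None when the predictive
    distribution is too uncertain (normalized entropy in [0, 1] above
    max_entropy). Assumed rule, illustrating uncertainty-based rejection."""
    p = np.asarray(class_probs, dtype=float)
    p = p / p.sum()
    # Normalize by log(K) so the threshold is independent of class count.
    entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))
    if entropy > max_entropy:
        return None  # leave unlabeled, or flag the class as under-represented
    return int(np.argmax(p))
```

Rejected points can either be dropped from the final labeling or collected as candidates for additional training data, matching the lifelong-learning use case described above.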

    Adaptive Shared Autonomy between Human and Robot to Assist Mobile Robot Teleoperation

    Teleoperation of mobile robots is widely used when it is impractical or infeasible for a human to be present but human decisions are still required. On the one hand, controlling the robot without assistance is stressful and error-prone for the operator because of time delay and the lack of situational awareness; on the other hand, despite recent achievements, a fully autonomous robot cannot yet execute tasks independently based on current models of perception and control. Therefore, both the human and the robot must remain in the control loop and contribute their intelligence to task execution simultaneously. This means that the human should share autonomy with the robot during operation. The challenge, however, is to coordinate these two sources of intelligence, human and robot, in the best possible way to guarantee safe and efficient task execution in teleoperation. This thesis therefore proposes a novel strategy: it models the user intention as a contextual task to complete an action primitive and provides the operator with appropriate motion assistance upon recognizing that task. In this way, the robot deals intelligently with ongoing tasks on the basis of contextual information, reduces the operator's workload, and improves task performance. To implement this strategy and to account for the uncertainties in acquiring and processing environment information and user input (i.e. the contextual information), a probabilistic shared-autonomy framework is introduced that recognizes, with uncertainty measures, the contextual task the operator is performing with the robot and offers the operator appropriate assistance in task execution according to these measures.
    Since the way the operator executes a task is implicit, it is not trivial to model the motion pattern of task execution manually; hence a set of data-driven approaches is used to derive the patterns of different task executions from human demonstrations and to adapt to the operator's needs in an intuitive way over the long term. The practicality and scalability of the proposed approaches are demonstrated by extensive experiments both in simulation and on the real robot. With the proposed approaches, the operator can be actively and appropriately supported by increasing the robot's cognitive capability and autonomy flexibility.
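The core probabilistic loop of such a shared-autonomy scheme, maintaining a belief over which contextual task the operator is performing and scaling the robot's assistance by that belief, can be sketched in a few lines. The likelihood model and the linear blending rule below are common generic choices assumed for illustration, not the thesis's exact formulation.

```python
import numpy as np

def update_belief(belief, likelihoods):
    """One recursive Bayes step over discrete candidate tasks:
    posterior ∝ likelihood * prior, renormalized."""
    posterior = np.asarray(belief, dtype=float) * np.asarray(likelihoods, dtype=float)
    return posterior / posterior.sum()

def blend_command(user_cmd, assist_cmd, confidence):
    """Arbitrate between the operator's input and the robot's assistive
    command: the more confident the task recognition, the more the
    robot assists (assumed linear blending, confidence in [0, 1])."""
    alpha = float(confidence)
    return (1 - alpha) * np.asarray(user_cmd) + alpha * np.asarray(assist_cmd)
```

At each control cycle, the belief would be updated from the observed user input, and the belief of the most likely task would serve as the blending confidence, so assistance fades out automatically whenever the intent estimate is uncertain.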