
    LiDAR-based Semantic Labeling: Automotive 3D Scene Understanding

    Mobile robots and autonomous vehicles employ various sensor modalities to perceive and interpret their environment. Alongside cameras and radar sensors, LiDAR sensors are a central component of modern environment perception methods. In addition to the precise distance measurements these sensors provide, a comprehensive semantic understanding of the scene is required to enable autonomous systems to act efficiently and safely. This thesis presents the newly developed LiLaNet, a real-time-capable neural network architecture for the semantic point-wise classification of LiDAR point clouds. To this end, approaches from 2D image processing are adopted by representing the 3D LiDAR point cloud as a 2D cylindrical image. As demonstrated on several datasets, this surpasses the results of state-of-the-art approaches to LiDAR-based point-wise classification. Large datasets play a fundamental role in developing machine learning approaches such as those used in this work. For this reason, two datasets based on modern LiDAR sensors are created. The automatic dataset generation procedure developed in this work, which combines multiple sensor modalities, specifically the camera and the LiDAR sensor, reduces the cost and time of the typically manual dataset generation. In addition, a multimodal data compression scheme is presented that transfers a stereo camera compression method to the LiDAR sensor. This reduces the LiDAR data volume while preserving the underlying semantic and geometric information, which in turn improves the real-time capability of downstream algorithms in autonomous systems. Furthermore, two extensions of the presented semantic classification method are outlined.
First, sensor dependence is reduced by introducing PiLaNet, a new 3D network architecture that keeps the LiDAR point cloud in 3D Cartesian space and thereby replaces the more sensor-dependent 2D cylindrical projection. Second, the uncertainty of neural networks is modeled implicitly by integrating a class hierarchy into the training process. Overall, this thesis presents novel, high-performing approaches to 3D LiDAR-based semantic scene understanding that contribute to the performance, reliability, and safety of future mobile robots and autonomous vehicles.
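    The 2D cylindrical representation described in the abstract can be illustrated with a short sketch. This is a minimal, generic range-image projection, not the actual LiLaNet pipeline; the function name, angular resolutions, and vertical field of view below are hypothetical example values:

    ```python
    import numpy as np

    def cylindrical_projection(points, h_res=0.25, v_res=0.5, v_fov=(-24.0, 2.0)):
        """Project an (N, 3) point cloud onto a 2D cylindrical range image.

        points : array of (x, y, z) coordinates in the sensor frame.
        h_res, v_res : angular resolutions in degrees (example values).
        v_fov : vertical field of view in degrees (example value).
        """
        x, y, z = points[:, 0], points[:, 1], points[:, 2]
        r = np.sqrt(x**2 + y**2 + z**2)           # range to each point
        azimuth = np.degrees(np.arctan2(y, x))    # horizontal angle
        elevation = np.degrees(np.arcsin(z / r))  # vertical angle

        # Discretize the angles into integer pixel coordinates.
        u = ((azimuth + 180.0) / h_res).astype(int)
        v = ((v_fov[1] - elevation) / v_res).astype(int)

        width = int(360.0 / h_res)
        height = int((v_fov[1] - v_fov[0]) / v_res) + 1
        image = np.zeros((height, width), dtype=np.float32)

        # Keep only points that fall inside the image and store their range.
        valid = (u >= 0) & (u < width) & (v >= 0) & (v < height)
        image[v[valid], u[valid]] = r[valid]
        return image
    ```

    Each pixel then carries the measured range (further channels such as intensity can be stacked analogously), which is what allows standard 2D convolutional architectures to process the point cloud.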

    A Review of Landcover Classification with Very-High Resolution Remotely Sensed Optical Images—Analysis Unit, Model Scalability and Transferability

    As an important application of remote sensing, landcover classification remains one of the most challenging tasks in very-high-resolution (VHR) image analysis. As a rapidly increasing number of Deep Learning (DL) based landcover methods and training strategies are claimed to be state-of-the-art, the already fragmented technical landscape of landcover mapping has become further complicated. Although a plethora of review articles attempt to guide researchers in making an informed choice of landcover mapping methods, they either focus on applications in a specific area or revolve around general deep learning models, and thus lack a systematic view of the ever-advancing landcover mapping methods. In addition, issues related to training samples and model transferability have become more critical than ever in an era dominated by data-driven approaches, but they were addressed to a lesser extent in previous review articles on remote sensing classification. Therefore, in this paper, we present a systematic overview of existing methods, starting from learning methods and the basic analysis units used for landcover mapping, and moving to challenges and solutions concerning three aspects of scalability and transferability in remote sensing classification: (1) sparsity and imbalance of data; (2) domain gaps across different geographical regions; and (3) multi-source and multi-view fusion. We discuss each category of methods in detail, draw concluding remarks on these developments, and recommend potential directions for continued work.

    Proceedings of the 2020 Joint Workshop of Fraunhofer IOSB and Institute for Anthropomatics, Vision and Fusion Laboratory

    In 2020, the annual joint workshop of Fraunhofer IOSB and the Lehrstuhl für Interaktive Echtzeitsysteme took place. From July 27 to 31, the doctoral students of both institutions presented the state of their research on topics such as artificial intelligence, machine learning, computer vision, usage control, and metrology. The results of these presentations are collected in this volume as technical reports.

    A Survey of Computer Vision Methods for 2D Object Detection from Unmanned Aerial Vehicles

    The spread of Unmanned Aerial Vehicles (UAVs) over the last decade has revolutionized many application fields. The most investigated research topics focus on increasing autonomy during operational campaigns, environmental monitoring, surveillance, mapping, and labeling. To achieve such complex goals, a high-level module builds semantic knowledge by leveraging the outputs of a low-level module that takes data acquired from multiple sensors and extracts information about what is sensed. Object detection is undoubtedly the most important low-level task, and the sensors most commonly employed to accomplish it are by far RGB cameras, owing to their cost, their dimensions, and the wide literature on RGB-based object detection. This survey presents recent advances in 2D object detection for UAVs, focusing on the differences, strategies, and trade-offs between the generic object detection problem and the adaptation of such solutions to UAV operations. Moreover, a new taxonomy is proposed that considers different height intervals and is driven by the methodological approaches introduced by state-of-the-art works rather than by hardware, physical, and/or technological constraints.

    Internet of Robotic Things Intelligent Connectivity and Platforms

    The Internet of Things (IoT) and Industrial IoT (IIoT) have developed rapidly in the past few years, as both the Internet and “things” have evolved significantly. “Things” now range from simple Radio Frequency Identification (RFID) devices to smart wireless sensors, intelligent wireless sensors and actuators, robotic things, and autonomous vehicles operating in consumer, business, and industrial environments. The emergence of “intelligent things” (static or mobile) in collaborative autonomous fleets requires new architectures, connectivity paradigms, trustworthiness frameworks, and platforms for the integration of applications across different business and industrial domains. These new applications accelerate the development of autonomous system design paradigms and the proliferation of the Internet of Robotic Things (IoRT). In IoRT, collaborative robotic things can communicate with other things, learn autonomously, interact safely with the environment, humans and other things, and gain qualities like self-maintenance, self-awareness, self-healing, and fail-operational behavior. IoRT applications can make use of the individual, collaborative, and collective intelligence of robotic things, as well as information from the infrastructure and operating context to plan, implement and accomplish tasks under different environmental conditions and uncertainties. The continuous, real-time interaction with the environment makes perception, location, communication, cognition, computation, connectivity, propulsion, and integration of federated IoRT and digital platforms important components of new-generation IoRT applications. This paper reviews the taxonomy of the IoRT, emphasizing the IoRT intelligent connectivity, architectures, interoperability, and trustworthiness framework, and surveys the technologies that enable the application of the IoRT across different domains to perform missions more efficiently, productively, and completely. 
The aim is to provide a novel perspective on the IoRT that encompasses communication among robotic things and humans, and to highlight the convergence of several technologies and the interactions between the different taxonomies used in the literature.