
    Lane and Road Marking Detection with a High Resolution Automotive Radar for Automated Driving

    The automotive industry is currently undergoing an unprecedented transformation, in which driver assistance and automated driving play a decisive role. An automated driving system mainly comprises three steps: perception and modeling of the environment, trajectory planning, and vehicle control. With good perception and modeling of the environment, a vehicle can successfully perform functions such as adaptive cruise control, emergency brake assist, and lane change assist. Driving functions that must detect lanes currently rely exclusively on camera sensors. Under changing light conditions, insufficient illumination, or reduced visibility, e.g. in fog, video cameras can be severely impaired. To compensate for these drawbacks, this doctoral thesis develops a "radar-capable" lane marking detection system with which the vehicle can detect lanes under all lighting conditions; radars already installed in the vehicle can be used for this purpose. Today's lane markings can be captured very well by camera sensors, but because of their insufficient backscattering properties for radar waves they are not detected by radar. To address this, the backscattering properties of various reflector types are investigated in this work, both through simulations and with practical measurements, and a reflector type is proposed that is suitable for integration into today's lane markings or even for direct installation in the road surface. A further focus of this doctoral thesis is the use of artificial intelligence (AI) to detect and classify lanes with radar as well. The recorded radar data are analyzed by means of semantic segmentation, detecting lane courses and free space. At the same time, the potential of AI-based environment understanding with imaging radar data is demonstrated.

    A New Wave in Robotics: Survey on Recent mmWave Radar Applications in Robotics

    We survey the current state of millimeter-wave (mmWave) radar applications in robotics with a focus on unique capabilities, and discuss future opportunities based on the state of the art. Frequency-Modulated Continuous-Wave (FMCW) mmWave radars operating in the 76-81 GHz range are an appealing alternative to lidars, cameras, and other sensors operating in or near the visual spectrum. Radar has become more widely available in new packaging classes that are more convenient for robotics, and its longer wavelengths can penetrate visual clutter such as fog, dust, and smoke. We begin by covering radar principles as they relate to robotics. We then review the relevant new research across a broad spectrum of robotics applications, beginning with motion estimation, localization, and mapping. We then cover object detection and classification, and close with an analysis of current datasets and calibration techniques that provide entry points into radar research. Comment: 19 pages, 11 figures, 2 tables, TRO submission pending
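The FMCW principle underlying these radars can be illustrated in a few lines: after de-chirping, a target at range R produces a beat tone at f_b = 2RB/(cT_c), so range follows directly from a spectral peak. The sketch below simulates this; all parameter values are illustrative assumptions, not taken from the survey.

```python
# Hedged sketch: recovering target range from an FMCW beat frequency.
# Parameter values are illustrative, not from any specific sensor.
import numpy as np

c = 3e8        # speed of light (m/s)
B = 4e9        # chirp bandwidth, roughly the 77-81 GHz automotive band (Hz)
T_c = 40e-6    # chirp duration (s)
f_s = 50e6     # ADC sample rate (Hz)

R_true = 30.0                          # simulated target range (m)
f_beat = 2 * R_true * B / (c * T_c)    # beat frequency produced by that range

t = np.arange(int(f_s * T_c)) / f_s
iq = np.exp(2j * np.pi * f_beat * t)   # ideal de-chirped IQ signal

spectrum = np.abs(np.fft.fft(iq))
peak_bin = np.argmax(spectrum[: len(t) // 2])
f_est = peak_bin * f_s / len(t)
R_est = f_est * c * T_c / (2 * B)      # invert the beat-frequency relation
```

The range resolution c/(2B) of such a wideband chirp is a few centimeters, which is why these sensors are attractive for robotics despite their sparse returns.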

    Lidar-based Obstacle Detection and Recognition for Autonomous Agricultural Vehicles

    Today, agricultural vehicles are available that can drive autonomously and follow exact route plans more precisely than human operators. Combined with advancements in precision agriculture, autonomous agricultural robots can reduce manual labor, improve workflow, and optimize yield. However, as of today, human operators are still required for monitoring the environment and acting upon potential obstacles in front of the vehicle. To eliminate this need, safety must be ensured by accurate and reliable obstacle detection and avoidance systems. In this thesis, lidar-based obstacle detection and recognition in agricultural environments has been investigated. A rotating multi-beam lidar generating 3D point clouds was used for point-wise classification of agricultural scenes, while multi-modal fusion with cameras and radar was used to increase performance and robustness. Two research perception platforms were presented and used for data acquisition. The proposed methods were all evaluated on recorded datasets that represented a wide range of realistic agricultural environments and included both static and dynamic obstacles. For 3D point cloud classification, two methods were proposed for handling density variations during feature extraction. One method outperformed a frequently used generic 3D feature descriptor, whereas the other method showed promising preliminary results using deep learning on 2D range images. For multi-modal fusion, four methods were proposed for combining lidar with color camera, thermal camera, and radar. Gradual improvements in classification accuracy were seen, as spatial, temporal, and multi-modal relationships were introduced in the models. Finally, occupancy grid mapping was used to fuse and map detections globally, and runtime obstacle detection was applied on mapped detections along the vehicle path, thus simulating an actual traversal. The proposed methods serve as a first step towards full autonomy for agricultural vehicles.
    The study has thus shown that recent advancements in autonomous driving can be transferred to the agricultural domain, when accurate distinctions are made between obstacles and processable vegetation. Future research in the domain has further been facilitated with the release of the multi-modal obstacle dataset, FieldSAFE.
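The occupancy grid fusion step used to map detections globally is commonly implemented as a log-odds update per cell: hits raise a cell's log-odds, traversed free space lowers it. A minimal sketch, with illustrative increment values that are assumptions rather than the thesis' parameters:

```python
# Minimal log-odds occupancy grid sketch; increments are illustrative.
import numpy as np

L_OCC, L_FREE = 0.85, -0.4   # log-odds increments for a hit / a miss

def update_grid(grid, hits, misses):
    """Fuse one scan: raise log-odds at hit cells, lower it along free space."""
    for r, c in hits:
        grid[r, c] += L_OCC
    for r, c in misses:
        grid[r, c] += L_FREE
    return grid

def occupancy_prob(grid):
    """Convert accumulated log-odds back to occupancy probability."""
    return 1.0 / (1.0 + np.exp(-grid))
```

Because the update is additive, evidence from lidar, camera, and radar detections can be fused into the same grid simply by applying each sensor's hits and misses in turn.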

    Multi-User Gesture Recognition with Radar Technology

    The aim of this work is the development of a radar system for consumer applications. It is capable of tracking multiple people in a room and offers a touchless human-machine interface for purposes that range from entertainment to hygiene.

    Extended Object Tracking: Introduction, Overview and Applications

    This article provides an elaborate overview of current research in extended object tracking. We provide a clear definition of the extended object tracking problem and discuss its delimitation from other types of object tracking. Next, different aspects of extended object modelling are extensively discussed. Subsequently, we give a tutorial introduction to two basic and widely used extended object tracking approaches: the random matrix approach and the Kalman filter-based approach for star-convex shapes. The next part treats the tracking of multiple extended objects and elaborates on how the large number of feasible association hypotheses can be tackled using both Random Finite Set (RFS) and non-RFS multi-object trackers. The article concludes with a summary of current applications, in which four example applications involving camera, X-band radar, light detection and ranging (lidar), and red-green-blue-depth (RGB-D) sensors are highlighted. Comment: 30 pages, 19 figures
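The core idea of the random matrix approach can be sketched in simplified form: the object's extent is maintained as a positive-definite matrix and blended with the scatter of the current measurement set, while the centroid updates the kinematic state. This is a heavily simplified illustration of the idea, not the full inverse-Wishart update from the literature; all names are hypothetical.

```python
# Hedged, simplified sketch of the random-matrix extent update.
# Not the exact update from the tutorial; illustrative only.
import numpy as np

def random_matrix_update(x, X, nu, Z):
    """One simplified update step.

    x  : kinematic mean (2,)          -- object centroid estimate
    X  : extent matrix (2, 2)         -- ellipse-shaped object extent
    nu : extent confidence (pseudo-count of past measurements)
    Z  : measurements (n, 2)          -- detections on the object this scan
    """
    n = Z.shape[0]
    z_bar = Z.mean(axis=0)                   # centroid of this scan's detections
    S = (Z - z_bar).T @ (Z - z_bar)          # measurement scatter matrix
    x_new = (nu * x + n * z_bar) / (nu + n)  # blend centroid into the state
    X_new = (nu * X + S) / (nu + n)          # blend scatter into the extent
    return x_new, X_new, nu + n
```

The appeal of the approach is that shape and kinematics are estimated jointly from the same detections, so an elongated scatter of returns naturally produces an elongated extent estimate.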

    Deep learning based 3D object detection for automotive radar and camera fusion

    Perception in the domain of autonomous vehicles is a key discipline to achieve the automation of Intelligent Transport Systems. Therefore, this Master's thesis aims to develop a sensor fusion technique for radar and camera to create an enriched representation of the environment for 3D object detection using deep learning algorithms. To this end, the idea of PointPainting [1] is used as a starting point and is adapted to a growing sensor, the 3+1D radar, in which the radar point cloud is aggregated with the semantic information from the camera.
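The PointPainting-style fusion described above can be sketched as follows: each radar point is projected into the image, and the per-pixel class scores of the semantic segmentation are appended to the point's features. Function and variable names below are hypothetical, and a pinhole camera model with points already in the camera frame is assumed.

```python
# Hedged sketch of PointPainting-style radar/camera fusion.
# Assumes points are in the camera frame; names are hypothetical.
import numpy as np

def paint_points(points, seg_scores, intrinsics):
    """Append per-pixel semantic scores to each projected radar point.

    points     : (N, 4) array of [x, y, z, doppler] in the camera frame
    seg_scores : (H, W, K) per-pixel class scores from the segmentation net
    intrinsics : (3, 3) pinhole camera matrix
    """
    uvw = (intrinsics @ points[:, :3].T).T       # project into the image plane
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)  # pixel coordinates
    h, w, _ = seg_scores.shape
    uv[:, 0] = np.clip(uv[:, 0], 0, w - 1)       # clamp to image bounds
    uv[:, 1] = np.clip(uv[:, 1], 0, h - 1)
    scores = seg_scores[uv[:, 1], uv[:, 0]]      # (N, K) class scores per point
    return np.hstack([points, scores])           # "painted" points: (N, 4 + K)
```

The painted point cloud can then be fed to any point-cloud-based 3D detector, which is what makes the scheme attractive: the fusion happens at the input representation, not inside the network.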
