A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles
This paper reviews current developments and discusses critical issues with obstacle detection systems for automated vehicles. Autonomous driving is a key driver of future mobility, and obstacle detection systems play a crucial role in implementing and deploying it on our roads and city streets. The review surveys the technology and existing systems for obstacle detection. Specifically, we examine the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and infrared (IR) sensors, and review their capabilities and behaviour in a range of situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, are not yet sophisticated enough to guarantee 100% precision and accuracy, so further substantial effort is needed.
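To make the fusion claim concrete, the sketch below shows one minimal way to combine per-sensor detections: an independent-sensor log-odds update for a single occupancy-grid cell. The sensor names and probabilities are illustrative assumptions, not values from any system reviewed in the paper.

```python
import numpy as np

def logit(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def fuse_occupancy(sensor_probs, prior=0.5):
    """Fuse per-sensor occupancy probabilities for one grid cell using a
    log-odds update, assuming conditionally independent sensors."""
    l = logit(prior)
    for p in sensor_probs:
        l += logit(p) - logit(prior)
    return 1.0 / (1.0 + np.exp(-l))

# Hypothetical readings for the same cell: the camera is unsure in fog,
# while radar and ultrasonic still report strong returns.
readings = {"camera": 0.55, "radar": 0.90, "ultrasonic": 0.80}
print(fuse_occupancy(readings.values()))   # fused belief ~0.98
```

Even with one weak modality, the fused belief exceeds any single sensor's, which is the intuition behind the paper's recommendation to combine technologies.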
Digital Cognitive Companions for Marine Vessels: On the Path Towards Autonomous Ships
As in the automotive industry, industry and academia are making extensive efforts to create autonomous ships. The solutions for this are very technology-intensive: many building blocks, often relying on AI technology, need to work together to create a complete system that is safe and reliable to use. Even when ships are fully unmanned, humans are still foreseen to guide them when unknown situations arise; this will be done through teleoperation systems. In this thesis, methods are presented to enhance the capability of two building blocks that are important for autonomous ships: a positioning system and a system for teleoperation. The positioning system has been constructed not to rely on the Global Positioning System (GPS), as that system can be jammed or spoofed. Instead, it uses Bayesian calculations to compare bottom-depth and magnetic-field measurements with known sea charts and magnetic-field maps in order to estimate the position. State-of-the-art techniques for this method typically use high-resolution maps, but hardly any high-resolution terrain maps are available in the world. Hence, we present a method using standard sea charts and compensate for the lower accuracy by using other domains, such as magnetic-field intensity and bearings to landmarks. Using data from a field trial, we showed that the fusion method using multiple domains was more robust than using only one domain. For the second building block, we first investigated how 3D and VR approaches could support the remote operation of unmanned ships over a data connection with low throughput, by comparing the respective graphical user interfaces (GUIs) with a Baseline GUI modelled on the interfaces currently applied in such contexts. Our findings show that both the 3D and VR approaches significantly outperform the traditional approach: 3D GUI and VR GUI users reacted better to potentially dangerous situations than Baseline GUI users and kept track of the surroundings more accurately. Building on this, we conducted a teleoperation user study using real-world data from a field trial in the archipelago, in which users assisted the positioning system with bearings to landmarks. Users found that the tool gave a good overview, and despite the low-throughput connection, they managed through the GUI to significantly improve the positioning accuracy.
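The multi-domain fusion step can be illustrated with a particle-filter measurement update. The toy depth and magnetic-field maps, noise levels, and function names below are assumptions for the sketch, not the thesis's actual charts or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def chart_depth(xy):
    # Toy stand-in for a sea chart: depth in metres at positions xy (N, 2).
    return 20.0 + 5.0 * np.sin(0.01 * xy[:, 0]) + 3.0 * np.cos(0.01 * xy[:, 1])

def magnetic_intensity(xy):
    # Toy stand-in for a magnetic-field map: intensity in nanotesla.
    return 50_000.0 + 200.0 * np.sin(0.005 * xy[:, 0] + 0.003 * xy[:, 1])

def measurement_update(particles, weights, z_depth, z_mag,
                       sigma_depth=2.0, sigma_mag=50.0):
    """Reweight position hypotheses by how well the measured depth and
    magnetic intensity match the maps; fusing both domains narrows the
    posterior more than either domain alone."""
    w_d = np.exp(-0.5 * ((chart_depth(particles) - z_depth) / sigma_depth) ** 2)
    w_m = np.exp(-0.5 * ((magnetic_intensity(particles) - z_mag) / sigma_mag) ** 2)
    weights = weights * w_d * w_m
    return weights / weights.sum()

particles = rng.uniform(0.0, 1_000.0, size=(5_000, 2))  # position hypotheses
weights = np.full(len(particles), 1.0 / len(particles))
weights = measurement_update(particles, weights, z_depth=22.5, z_mag=50_080.0)
print("posterior position estimate:", weights @ particles)
```

Bearings to landmarks would enter the same way, as a third likelihood factor multiplied into the weights.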
WaterScenes: A Multi-Task 4D Radar-Camera Fusion Dataset and Benchmark for Autonomous Driving on Water Surfaces
Autonomous driving on water surfaces plays an essential role in executing hazardous and time-consuming missions, such as maritime surveillance, survivor rescue, environmental monitoring, hydrographic mapping, and waste cleaning. This work presents WaterScenes, the first multi-task 4D radar-camera fusion dataset for autonomous driving on water surfaces. Equipped with a 4D radar and a monocular camera, our Unmanned Surface Vehicle (USV) provides all-weather solutions for discerning object-related information, including color, shape, texture, range, velocity, azimuth, and elevation. Focusing on typical static and dynamic objects on water surfaces, we label the camera images and radar point clouds at pixel level and point level, respectively. In addition to basic perception tasks, such as object detection, instance segmentation, and semantic segmentation, we also provide annotations for free-space segmentation and waterline segmentation. Leveraging the multi-task and multi-modal data, we conduct numerous experiments on the single modalities of radar and camera, as well as on the fused modalities. Results demonstrate that 4D radar-camera fusion can considerably enhance the robustness of perception on water surfaces, especially in adverse lighting and weather conditions. The WaterScenes dataset is publicly available at https://waterscenes.github.io.
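As an illustration of how pixel-level camera labels can be carried over to radar points in a fused setup like this, the sketch below projects 3D radar returns into a segmented camera image. The calibration matrices and function names are hypothetical; this is not the dataset's actual annotation toolchain.

```python
import numpy as np

def label_radar_points(points_radar, seg_mask, K, R, t):
    """Transfer pixel-level segmentation labels to radar points by
    projecting each 3D radar return into the camera image.
    points_radar: (N, 3) xyz in the radar frame.
    seg_mask: (H, W) integer class IDs from the camera branch.
    K: (3, 3) camera intrinsics; R, t: radar-to-camera extrinsics."""
    pts_cam = points_radar @ R.T + t              # radar frame -> camera frame
    in_front = pts_cam[:, 2] > 0                  # keep points ahead of camera
    uvw = pts_cam @ K.T                           # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).round().astype(int)
    H, W = seg_mask.shape
    valid = (in_front & (uv[:, 0] >= 0) & (uv[:, 0] < W)
             & (uv[:, 1] >= 0) & (uv[:, 1] < H))
    labels = np.full(len(points_radar), -1, dtype=int)  # -1 = unlabeled
    labels[valid] = seg_mask[uv[valid, 1], uv[valid, 0]]
    return labels
```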
Concept for an Automatic Annotation of Automotive Radar Data Using AI-segmented Aerial Camera Images
This paper presents an approach to automatically annotate automotive radar data with AI-segmented aerial camera images. For this, the images of an unmanned aerial vehicle (UAV) flying above a radar vehicle are panoptically segmented and mapped in the ground plane onto the radar images. The detected instances and segments in the camera image can then be applied directly as labels for the radar data. Owing to its advantageous bird's-eye position, the UAV camera does not suffer from optical occlusion and can create annotations within the complete field of view of the radar. The effectiveness and scalability are demonstrated in measurements in which 589 pedestrians in the radar data were automatically labeled within 2 minutes. (6 pages, 5 figures; accepted at the IEEE International Radar Conference 2023, Special Session "Automotive Radar".)
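A minimal version of the ground-plane mapping step might look as follows; the homography and transform names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def ground_plane_to_radar(mask_uv, H_uav_to_ground, T_ground_to_radar):
    """Map pixel coordinates of a UAV-image segment onto the radar's
    ground-plane coordinates via a plane homography.
    mask_uv: (N, 2) pixel coordinates of one segmented instance.
    H_uav_to_ground: (3, 3) homography, image plane -> metric ground plane.
    T_ground_to_radar: (3, 3) homogeneous 2D rigid transform into the
    radar's ground-plane frame."""
    uv1 = np.column_stack([mask_uv, np.ones(len(mask_uv))])
    xy1 = uv1 @ H_uav_to_ground.T
    xy = xy1[:, :2] / xy1[:, 2:3]                # metric ground coordinates
    xy1 = np.column_stack([xy, np.ones(len(xy))])
    return (xy1 @ T_ground_to_radar.T)[:, :2]    # radar x/y on ground plane
```

Applied to every pixel of a panoptic segment, this yields a ground-plane footprint that can be matched directly against radar detections to produce labels.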
A Review of Automatic Classification of Drones Using Radar: Key Considerations, Performance Evaluation and Prospects
Automatic target classification or recognition is a critical capability in non-cooperative radar surveillance in several defence and civilian applications. It is a well-established research field, and numerous techniques exist for recognising targets, including miniature unmanned air systems or drones (i.e., small, mini, micro, and nano platforms), from their radar signatures. These algorithms have notably benefited from advances in machine learning (e.g., deep neural networks) and are increasingly able to achieve remarkably high accuracies. However, such classification results are often reported with standard, generic object-recognition metrics and originate from testing on simulated or real radar measurements of drones under high signal-to-noise ratios. Hence, it is difficult to assess and benchmark the performance of different classifiers under realistic operational conditions. In this paper, we first review the key challenges and considerations associated with the automatic classification of miniature drones from radar data. We then present a set of important performance measures from an end-user perspective; these are relevant to typical drone surveillance system requirements and constraints. Selected examples from real radar observations are shown for illustration. We also outline various emerging approaches and future directions that can produce more robust drone classifiers for radar.
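To ground the discussion, here is a minimal sketch of the classic pipeline such classifiers build on: a short-time Fourier transform turns the complex I/Q return into a micro-Doppler spectrogram, which a classifier then labels. The nearest-centroid classifier is a toy stand-in for the deep networks the review cites, and all signal parameters are illustrative.

```python
import numpy as np

def spectrogram(iq, win=64, hop=32):
    """Magnitude micro-Doppler spectrogram of a complex I/Q radar return,
    computed as a windowed short-time Fourier transform."""
    frames = np.array([iq[i:i + win] * np.hanning(win)
                       for i in range(0, len(iq) - win + 1, hop)])
    return np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))

def classify(iq, centroids):
    """Toy nearest-centroid classifier over flattened, unit-normalised
    spectrograms. `centroids` maps class name -> mean feature vector
    (same length as the flattened spectrogram) learned from labeled data."""
    feat = spectrogram(iq).ravel()
    feat = feat / np.linalg.norm(feat)
    return min(centroids, key=lambda c: np.linalg.norm(feat - centroids[c]))
```

The review's point is precisely that accuracy numbers from such clean, high-SNR spectrograms say little about performance under operational clutter, range, and duty-cycle constraints.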
Team MIT Urban Challenge Technical Report
This technical report describes Team MIT's approach to the DARPA Urban Challenge. We have developed a novel strategy for using many inexpensive sensors, mounted on the vehicle periphery, and calibrated with a new cross-modal calibration technique. Lidar, camera, and radar data streams are processed using an innovative, locally smooth state representation that provides robust perception for real-time autonomous control. A resilient planning and control architecture has been developed for driving in traffic, comprising an innovative combination of well-proven algorithms for mission planning, situational planning, situational interpretation, and trajectory control. These innovations are being incorporated in two new robotic vehicles equipped for autonomous driving in urban environments, with extensive testing on a DARPA site visit course. Experimental results demonstrate all basic navigation and some basic traffic behaviors, including unoccupied autonomous driving, lane following using pure-pursuit control and our local frame perception strategy, obstacle avoidance using kinodynamic RRT path planning, U-turns, and precedence evaluation amongst other cars at intersections using our situational interpreter. We are working to extend these approaches to advanced navigation and traffic scenarios.
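The pure-pursuit lane-following law mentioned above is standard and compact enough to sketch. The geometry below is the textbook bicycle-model version, with illustrative lookahead and wheelbase values rather than the team's tuned parameters.

```python
import numpy as np

def pure_pursuit_steer(pose, path, lookahead=6.0, wheelbase=2.7):
    """Pure-pursuit steering: steer along the circular arc that passes
    through a goal point one lookahead distance ahead on the path.
    pose: (x, y, heading[rad]); path: (N, 2) waypoints in the same frame."""
    x, y, yaw = pose
    dist = np.hypot(path[:, 0] - x, path[:, 1] - y)
    ahead = dist >= lookahead
    goal = path[np.argmax(ahead)] if ahead.any() else path[-1]
    alpha = np.arctan2(goal[1] - y, goal[0] - x) - yaw   # bearing to goal
    alpha = (alpha + np.pi) % (2.0 * np.pi) - np.pi      # wrap to [-pi, pi)
    # Bicycle-model steering angle for arc curvature 2*sin(alpha)/lookahead.
    return np.arctan(2.0 * wheelbase * np.sin(alpha) / lookahead)
```

The single lookahead parameter trades tracking accuracy against smoothness, which is one reason the law remains a popular baseline for lane following.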