
    LiDAR and Camera Detection Fusion in a Real-Time Industrial Multi-Sensor Collision Avoidance System

    Collision avoidance is a critical task in many applications, such as ADAS (advanced driver-assistance systems), industrial automation, and robotics. In an industrial automation setting, certain areas should be off limits to an automated vehicle to protect people and high-valued assets. These areas can be quarantined by mapping (e.g., GPS) or via beacons that delineate a no-entry area. We propose a delineation method in which the industrial vehicle uses a LiDAR (Light Detection and Ranging) sensor and a single color camera to detect passive beacons, and model-predictive control to stop the vehicle from entering a restricted space. The beacons are standard orange traffic cones with a highly reflective vertical pole attached. The LiDAR can readily detect these beacons, but suffers from false positives caused by other reflective surfaces such as worker safety vests. Herein, we put forth a method for reducing false positive detections from the LiDAR by detecting the beacons in the camera imagery via a deep learning method and validating each detection using a neural-network-learned projection from the camera to the LiDAR space. Experimental data collected at Mississippi State University's Center for Advanced Vehicular Systems (CAVS) show the effectiveness of the proposed system in retaining true detections while mitigating false positives.
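    To make the validation step concrete, here is a minimal Python sketch of the cross-sensor gating idea described above: LiDAR beacon candidates are kept only if a camera detection, mapped into LiDAR space by a learned projection model, falls nearby. The gating radius, the `projector.predict` interface, and all names are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of cross-sensor validation: LiDAR beacon candidates are
# kept only when a camera detection, projected into LiDAR space by a
# learned model, falls within a gating radius. All names/values assumed.
import numpy as np

GATE_RADIUS_M = 0.75  # assumed gating threshold in metres

def project_camera_to_lidar(bbox, projector):
    """Map a camera bounding box (cx, cy, w, h) to a LiDAR (x, y) point
    using a trained regression model (e.g., a small MLP)."""
    features = np.asarray(bbox, dtype=np.float32).reshape(1, -1)
    return projector.predict(features)[0]  # -> array([x, y])

def validate_lidar_detections(lidar_points, camera_boxes, projector):
    """Return only the LiDAR candidates confirmed by a projected camera
    detection; lidar_points is a list of (x, y) numpy arrays."""
    projected = [project_camera_to_lidar(b, projector) for b in camera_boxes]
    return [p for p in lidar_points
            if any(np.linalg.norm(p - q) < GATE_RADIUS_M for q in projected)]

# Example with a stand-in projector (not a learned model) that just reads
# the first two bbox values as LiDAR coordinates, for demonstration only.
class _DummyProjector:
    def predict(self, X):
        return X[:, :2]

beacons = validate_lidar_detections(
    lidar_points=[np.array([4.8, 2.1]), np.array([12.0, -3.0])],
    camera_boxes=[(5.0, 2.0, 40, 80)],
    projector=_DummyProjector())
print(beacons)  # only the candidate near the projected camera detection
```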

    Safe navigation and human-robot interaction in assistant robotic applications

    The abstract is in the attachment.

    Avionics sensor fusion for small size unmanned aircraft Sense-and-Avoid

    Cooperative and non-cooperative Sense-and-Avoid (SAA) systems are key enablers for Unmanned Aircraft (UA) to routinely access non-segregated airspace. In this paper, some state-of-the-art cooperative and non-cooperative sensor and system technologies are investigated for small-size UA applications, and the associated multi-sensor data fusion techniques are discussed. Non-cooperative sensors, including both passive and active Forward Looking Sensors (FLS), and cooperative systems, including the Traffic Collision Avoidance System (TCAS), Automatic Dependent Surveillance-Broadcast (ADS-B) and/or Mode C transponders, are part of the proposed SAA architecture. After introducing the SAA system processes, the key mathematical models for data fusion are presented. The Interacting Multiple Model (IMM) algorithm is used to estimate the state vector of the intruders, and this estimate is propagated to predict the future trajectories using a probabilistic model. Adopting these mathematical models, conflict detection and resolution strategies for both cooperative and non-cooperative intruders are identified. Additionally, a detailed error analysis is performed to determine the overall uncertainty volume in the airspace surrounding the intruder tracks. This is accomplished by considering both the navigation and the tracking errors affecting the measurements and translating them into unified range and bearing uncertainty descriptors, which apply to both cooperative and non-cooperative scenarios. Detailed simulation case studies are carried out to evaluate the performance of the proposed SAA approach on a representative host platform (AEROSONDE UA) and various intruder platforms, including large transport aircraft and other UA. Results show that the required safe separation distance is always maintained when the SAA process is performed from ranges in excess of 500 metres.
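    As a concrete illustration of the IMM tracking step named above, the sketch below runs one IMM cycle with two constant-velocity Kalman filters that differ only in process noise (a common quiet/manoeuvring pairing). The range-only measurement model, noise values, and transition matrix are simplifying assumptions for illustration, not the paper's actual intruder models.

```python
# One IMM cycle: mix the mode-conditioned estimates, run a Kalman
# predict/update per mode, update mode probabilities from the measurement
# likelihoods, and fuse. State is [range, range-rate]; values are assumed.
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-velocity model
H = np.array([[1.0, 0.0]])                         # range-only measurement
R = np.array([[25.0]])                             # measurement noise (m^2)
Qs = [np.diag([0.01, 0.1]), np.diag([1.0, 10.0])]  # quiet vs manoeuvring
PI = np.array([[0.95, 0.05], [0.05, 0.95]])        # mode transition matrix

def imm_step(xs, Ps, mu, z):
    """xs/Ps: per-mode state estimates and covariances; mu: mode probs."""
    # 1) Mixing: blend each filter's prior with the others'
    c = PI.T @ mu                                  # predicted mode probs
    w = (PI * mu[:, None]) / c[None, :]            # mixing weights w[i, j]
    xs_mix = [sum(w[i, j] * xs[i] for i in range(2)) for j in range(2)]
    Ps_mix = [sum(w[i, j] * (Ps[i] + np.outer(xs[i] - xs_mix[j],
                                              xs[i] - xs_mix[j]))
                  for i in range(2)) for j in range(2)]
    # 2) Mode-matched Kalman predict/update, collecting likelihoods
    xs_new, Ps_new, lik = [], [], np.zeros(2)
    for j in range(2):
        x, P = F @ xs_mix[j], F @ Ps_mix[j] @ F.T + Qs[j]
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        y = z - H @ x
        xs_new.append(x + K @ y)
        Ps_new.append((np.eye(2) - K @ H) @ P)
        lik[j] = (np.exp(-0.5 * float(y @ np.linalg.inv(S) @ y))
                  / np.sqrt(2.0 * np.pi * np.linalg.det(S)))
    # 3) Mode probability update and fused output estimate
    mu_new = c * lik
    mu_new /= mu_new.sum()
    x_fused = sum(mu_new[j] * xs_new[j] for j in range(2))
    return xs_new, Ps_new, mu_new, x_fused

# One cycle from an uninformative start (illustrative numbers)
xs = [np.array([1000.0, -20.0]), np.array([1000.0, -20.0])]
Ps = [np.eye(2) * 100.0, np.eye(2) * 100.0]
mu = np.array([0.5, 0.5])
xs, Ps, mu, x_fused = imm_step(xs, Ps, mu, z=np.array([995.0]))
print(mu, x_fused)
```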

    Customized Co-Simulation Environment for Autonomous Driving Algorithm Development and Evaluation

    Increasing the implemented SAE level of autonomy in road vehicles requires extensive simulation and verification in a realistic simulation environment before proving-ground and public-road testing. The level of detail in the simulation environment helps ensure the safety of a real-world implementation and reduces algorithm development cost by allowing developers to complete most of the validation in simulation. Considering the camera, LIDAR, radar, and V2X sensors used in autonomous vehicles, it is essential to create a simulation environment that reproduces these sensors as realistically as possible. While sensor simulation is of crucial importance for perception algorithm development, the environment remains incomplete for simulating holistic AV operation unless it is complemented by a realistic vehicle dynamics model and traffic co-simulation. Therefore, this paper investigates existing simulation environments, identifies use-case scenarios, and creates a co-simulation environment that satisfies the simulation requirements for autonomous driving function development, using the Carla simulator (based on the Unreal game engine) for the environment, Sumo or Vissim for traffic co-simulation, Carsim or Matlab/Simulink for vehicle dynamics co-simulation, and Autoware or the authors' or users' own routines for autonomous driving algorithm co-simulation. As a result of this work, a model-based vehicle dynamics simulation with realistic sensor and traffic simulation is presented. A sensor fusion methodology is implemented in the created simulation environment as a use-case scenario. The results of this work will be a valuable resource for researchers who need a comprehensive co-simulation environment to develop connected and autonomous driving algorithms.
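    As a minimal illustration of the lock-stepped co-simulation pattern such an environment relies on, the sketch below synchronizes CARLA (via its Python API) and SUMO (via TraCI) on a shared fixed time step. The step size, scenario file, and the elided vehicle-state bridging are assumptions; the paper's environment additionally couples vehicle dynamics and driving-algorithm co-simulation.

```python
# Lock-stepped CARLA + SUMO co-simulation loop, assuming both simulators
# run locally with matching 50 ms steps. The actual bridging of vehicle
# state between the two is elided; only the stepping pattern is shown.
import carla
import traci

STEP_S = 0.05  # shared fixed time step, assumed

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)
world = client.get_world()

# Put CARLA in synchronous mode so it advances only on world.tick()
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = STEP_S
world.apply_settings(settings)

# Start SUMO through TraCI with the same step length (example .sumocfg)
traci.start(["sumo", "-c", "scenario.sumocfg", "--step-length", str(STEP_S)])

try:
    for _ in range(1000):          # 50 s of simulated time
        traci.simulationStep()     # advance SUMO traffic by one step
        # ... mirror SUMO vehicle poses into CARLA actors here ...
        world.tick()               # advance CARLA (sensors, physics)
finally:
    traci.close()
    settings.synchronous_mode = False
    world.apply_settings(settings)
```

    Stepping both simulators in lock-step like this keeps rendered sensor frames and traffic state aligned at every tick, which is the property a sensor-fusion use case depends on.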

    A Study on Recent Developments and Issues with Obstacle Detection Systems for Automated Vehicles

    This paper reviews current developments and discusses some critical issues with obstacle detection systems for automated vehicles. The concept of autonomous driving is the driving force behind future mobility. Obstacle detection systems play a crucial role in implementing and deploying autonomous driving on our roads and city streets. The current review examines the technology and existing systems for obstacle detection. Specifically, we look at the performance of LIDAR, RADAR, vision cameras, ultrasonic sensors, and infrared (IR) sensors, and review their capabilities and behaviour in a number of different situations: during daytime, at night, in extreme weather conditions, in urban areas, in the presence of smooth surfaces, in situations where emergency service vehicles need to be detected and recognised, and in situations where potholes need to be observed and measured. It is suggested that combining different technologies for obstacle detection gives a more accurate representation of the driving environment. In particular, for obstacle detection in extreme weather conditions (rain, snow, fog) and in some specific urban situations (shadows, reflections, potholes, insufficient illumination), the current developments, although already quite advanced, do not appear sophisticated enough to guarantee 100% precision and accuracy; hence, further substantial effort is needed.
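    As a toy illustration of the review's suggestion that combining modalities helps, the sketch below fuses per-sensor detection confidences after penalising sensors degraded by the current conditions. The penalty table, the sensor-independence assumption, and all values are illustrative, not results from the paper.

```python
# Confidence-weighted multi-sensor fusion for one obstacle hypothesis:
# each sensor's raw confidence is scaled down under conditions that
# degrade that modality, then the penalised confidences are combined
# under an independence assumption. All values are assumed.

DEGRADATION = {  # multiplicative confidence penalty per (sensor, condition)
    ("camera", "night"): 0.4,
    ("camera", "fog"): 0.5,
    ("lidar", "heavy_rain"): 0.6,
    ("radar", "fog"): 1.0,   # radar largely unaffected by fog
}

def fused_confidence(detections, conditions):
    """Combine per-sensor confidences (0..1) for one obstacle hypothesis;
    `detections` maps sensor name -> raw confidence."""
    miss = 1.0
    for sensor, conf in detections.items():
        for cond in conditions:
            conf *= DEGRADATION.get((sensor, cond), 1.0)
        miss *= (1.0 - conf)        # probability all sensors miss
    return 1.0 - miss

# Example: the camera is weak at night, but radar + LIDAR agreement
# keeps the fused confidence high.
print(fused_confidence({"camera": 0.8, "lidar": 0.7, "radar": 0.6},
                       conditions=["night"]))
```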

    Robots learn to behave: improving human-robot collaboration in flexible manufacturing applications

    The abstract is in the attachment.