
    A Generalized Bayesian Approach for Localizing Static Natural Obstacles on Unpaved Roads

    This paper presents an approach that implements sensor fusion and recursive Bayesian estimation (RBE) to improve a vehicle's ability to detect and localize obstacles in unpaved road environments. The proposed approach fuses RADAR, LiDAR, and stereovision to detect and localize static natural obstacles. Each sensor is characterized by a probabilistic sensor model that quantifies its level of confidence (LOC) and probability of detection (POD). Deploying these sensor models enables the fusion of heterogeneous sensors without extensive formulations while incorporating each sensor's strengths. An Extended Kalman Filter (EKF) is formulated and implemented for robust and computationally efficient RBE of obstacle locations while a sensor-equipped vehicle moves and observes them. Results with a test vehicle, showing the successful detection and localization of a static natural obstacle on an unpaved road, demonstrate the effectiveness of the proposed approach.
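
    The core machinery in this abstract is a per-observation Kalman-style update of a static obstacle's position. The following is a minimal sketch of that idea, not the paper's implementation; the state layout, the range/bearing measurement model, and all names are assumptions made for illustration.

    # Minimal sketch (assumed design, not the paper's code): recursive Bayesian
    # estimation of a static obstacle position via EKF-style updates.
    # State x = [px, py] in the world frame; since the obstacle is static, the
    # prediction step only inflates the covariance with small process noise Q.
    import numpy as np

    class StaticObstacleEKF:
        def __init__(self, x0, P0, Q):
            self.x = np.asarray(x0, dtype=float)   # estimated obstacle position
            self.P = np.asarray(P0, dtype=float)   # estimate covariance
            self.Q = np.asarray(Q, dtype=float)    # (small) process noise

        def predict(self):
            # Static obstacle: the motion model is the identity.
            self.P = self.P + self.Q

        def update(self, z, R, h, H):
            # z: one sensor's measurement (e.g. range/bearing), R: its noise,
            # h(x): measurement model, H: its Jacobian at the current estimate.
            y = z - h(self.x)                      # innovation
            S = H @ self.P @ H.T + R               # innovation covariance
            K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
            self.x = self.x + K @ y
            self.P = (np.eye(len(self.x)) - K @ H) @ self.P

    def range_bearing(x, vehicle_xy, vehicle_yaw):
        # Hypothetical range/bearing model for a RADAR or LiDAR return,
        # relative to the current vehicle pose.
        dx, dy = x[0] - vehicle_xy[0], x[1] - vehicle_xy[1]
        return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - vehicle_yaw])

    Each sensor would supply its own noise covariance R (reflecting its LOC/POD characterization) to the same update routine, which is what makes fusing heterogeneous sensors straightforward in this style of filter.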

    Multi-Sensor Fusion for 3D Object Detection

    Sensing and modelling the surrounding environment is crucial for solving many of the problems in intelligent machines such as self-driving cars, autonomous robots, and augmented reality displays. The performance, reliability, and safety of autonomous agents rely heavily on how the environment is modelled. Two-dimensional models are inadequate to capture the three-dimensional nature of real-world scenes; three-dimensional models are necessary to achieve the standards required by the autonomy stack for intelligent agents to work alongside humans. Data-driven deep learning methodologies for three-dimensional scene modelling have evolved greatly in the past few years because of the availability of large amounts of data from a variety of sensors in the form of well-designed datasets. 3D object detection and localization are two of the key requirements for tasks such as obstacle avoidance, agent-to-agent interaction, and path planning. Most object detection methodologies work on data from a single sensor, such as a camera or LiDAR. Camera sensors provide feature-rich scene data, while LiDAR provides 3D geometric information. More advanced object detection and localization can be achieved by leveraging the information from both camera and LiDAR sensors. To effectively quantify the uncertainty of each sensor channel, an appropriate fusion strategy is needed to fuse the independently encoded point clouds from LiDAR with the RGB images from standard vision cameras. In this work, we introduce such a fusion strategy and develop a multimodal pipeline that utilizes existing state-of-the-art deep learning based data encoders to produce robust 3D object detection and localization in real time. The performance of the proposed fusion model is evaluated on the popular KITTI 3D benchmark dataset.
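
    One common way to fuse independently encoded LiDAR and camera data is to project each 3D point into the image and append the image features sampled there to the point features. The sketch below illustrates that general pattern with a KITTI-style projection matrix; it is an assumed illustration, not the specific pipeline described in the abstract, and all function names are hypothetical.

    # Illustrative sketch of LiDAR-camera feature fusion by projection.
    import numpy as np

    def project_lidar_to_image(points_xyz, P):
        # points_xyz: (N, 3) points already transformed into the camera frame.
        # P: (3, 4) camera projection matrix (KITTI-style).
        pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
        uvw = pts_h @ P.T
        uv = uvw[:, :2] / uvw[:, 2:3]               # pixel coordinates
        return uv, uvw[:, 2]                        # (u, v) and depth

    def fuse_point_and_image_features(points_xyz, point_feats, image_feats, P):
        # image_feats: (H, W, C) feature map produced by the camera branch.
        uv, depth = project_lidar_to_image(points_xyz, P)
        h, w = image_feats.shape[:2]
        u = np.clip(np.round(uv[:, 0]).astype(int), 0, w - 1)
        v = np.clip(np.round(uv[:, 1]).astype(int), 0, h - 1)
        valid = depth > 0                           # keep points in front of the camera
        sampled = image_feats[v, u]                 # nearest-neighbour feature sampling
        fused = np.concatenate([point_feats, sampled], axis=1)
        return fused[valid]

    The fused per-point features could then be passed to any point-cloud detection head; the abstract's actual encoders and fusion operator are not specified here.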

    Deep multi-modal U-net fusion methodology of infrared and ultrasonic images for porosity detection in additive manufacturing

    We developed a deep fusion methodology of non-destructive testing (NDT) in-situ infrared and ex-situ ultrasonic images for localizing porosity without compromising the integrity of printed components, aiming to improve the laser-based additive manufacturing (LBAM) process. A core challenge with LBAM is that a lack of fusion between successive layers of printed metal can lead to porosity and abnormalities in the printed component. We developed a sensor fusion U-Net methodology that fills the gap in fusing in-situ thermal images with ex-situ ultrasonic images by employing a U-Net Convolutional Neural Network (CNN) for feature extraction and two-dimensional object localization. We modify the U-Net framework with inception and LSTM block layers. We validate the models by comparing our single-modality models and fusion models with ground-truth X-ray computed tomography images. The inception U-Net fusion model localized porosity with the highest mean intersection over union score of 0.557.
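
    A minimal sketch of the two-modality fusion pattern described here, assuming simple channel-concatenation of the infrared and ultrasonic encoder features followed by a per-pixel porosity head. The paper's inception and LSTM blocks, channel sizes, and depths are not reproduced; everything below is an assumed illustration.

    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

    class TwoModalFusionStage(nn.Module):
        def __init__(self, c=32):
            super().__init__()
            self.enc_ir = conv_block(1, c)        # in-situ infrared branch
            self.enc_us = conv_block(1, c)        # ex-situ ultrasonic branch
            self.fuse = conv_block(2 * c, c)      # channel-concatenation fusion
            self.head = nn.Conv2d(c, 1, 1)        # per-pixel porosity logit

        def forward(self, ir, us):
            # ir, us: (B, 1, H, W) registered image pairs of the same region.
            f = torch.cat([self.enc_ir(ir), self.enc_us(us)], dim=1)
            return self.head(self.fuse(f))

    Predicted porosity masks from such a model would then be scored against the CT ground truth with mean intersection over union, the metric quoted above.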

    People tracking by cooperative fusion of RADAR and camera sensors

    Accurate 3D tracking of objects from a monocular camera is challenging due to the loss of depth information during projection. Although ranging by RADAR has proven effective in highway environments, people tracking remains beyond the capability of single-sensor systems. In this paper, we propose a cooperative RADAR-camera fusion method for tracking people on the ground plane. Using the average person height, a joint detection likelihood is calculated by back-projecting detections from the camera onto the RADAR range-azimuth data. Peaks in the joint likelihood, representing candidate targets, are fed into a particle filter tracker. Depending on the association outcome, particles are updated either using the associated detections (tracking by detection) or by sampling the raw likelihood itself (tracking before detection). Utilizing the raw likelihood data has the advantage that lost targets are continuously tracked even if the camera or RADAR signal falls below the detection threshold. We show that in single-target, uncluttered environments, the proposed method consistently outperforms camera-only tracking. Experiments in a real-world urban environment also confirm that the cooperative fusion tracker produces significantly better estimates, even in difficult and ambiguous situations.
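
    A minimal sketch of the joint-likelihood idea: a camera detection is mapped to the ground plane using an assumed average person height and pinhole geometry, spread over the RADAR range-azimuth grid, multiplied with the RADAR map, and used to weight particles. Grid resolutions, the Gaussian spreading, and all names are assumptions, not the paper's implementation.

    import numpy as np

    def camera_detection_to_range_azimuth(bbox, fx, fy, cx, person_height=1.7):
        # bbox = (u_min, v_min, u_max, v_max) in pixels. Apparent height gives
        # depth; horizontal offset from the principal point gives azimuth.
        u_min, v_min, u_max, v_max = bbox
        depth = fy * person_height / max(v_max - v_min, 1e-6)
        azimuth = np.arctan2(0.5 * (u_min + u_max) - cx, fx)
        return depth, azimuth

    def joint_likelihood(radar_map, bbox, cam_intrinsics, r_bins, az_bins,
                         sigma_r=0.5, sigma_az=0.05):
        # Spread the back-projected camera detection as a Gaussian over the
        # same range-azimuth grid as the RADAR map, then multiply the maps.
        fx, fy, cx = cam_intrinsics
        r0, az0 = camera_detection_to_range_azimuth(bbox, fx, fy, cx)
        rr, aa = np.meshgrid(r_bins, az_bins, indexing="ij")
        cam_map = np.exp(-0.5 * (((rr - r0) / sigma_r) ** 2
                                 + ((aa - az0) / sigma_az) ** 2))
        return radar_map * cam_map

    def update_particle_weights(particles, weights, joint_map, r_bins, az_bins):
        # particles: (N, 2) ground-plane hypotheses as (range, azimuth).
        r_idx = np.clip(np.searchsorted(r_bins, particles[:, 0]), 0, len(r_bins) - 1)
        a_idx = np.clip(np.searchsorted(az_bins, particles[:, 1]), 0, len(az_bins) - 1)
        weights = weights * joint_map[r_idx, a_idx]
        return weights / max(weights.sum(), 1e-12)

    Sampling particle weights directly from the joint map (rather than only from associated detections) is what allows a lost target to keep being tracked while either sensor's response sits below its detection threshold.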

    Artificial Intelligence and Systems Theory: Applied to Cooperative Robots

    This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The acronym of the project stands both for "Society of Robots" and "Soccer Robots", the case study where we are testing our population of robots. Designing soccer robots is a very challenging problem: the robots must act not only to shoot a ball towards the goal, but also to detect and avoid static (walls, stopped robots) and dynamic (moving robots) obstacles. Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination (including reinforcement learning in cooperative and adversarial environments), and behavior-based architectures for real-time task execution of cooperating robot teams.