    Autonomous Vehicles and Machines Conference, at IS&T Electronic Imaging

    The performance of autonomous agents in both commercial and consumer applications increases with their situational awareness. Tasks such as obstacle avoidance, agent-to-agent interaction, and path planning depend directly on the ability to convert sensor readings into scene understanding, and central to this is the ability to detect and recognize objects. Many object detection methodologies operate on a single modality, such as vision or LiDAR. Camera-based object detection models benefit from an abundance of feature-rich information for classifying different types of objects, while LiDAR-based models use sparse point clouds in which each point gives the accurate 3D position of an object surface. Camera-based methods lack accurate object-to-lens distance measurements, and LiDAR-based methods lack dense, feature-rich detail. By utilizing information from both camera and LiDAR sensors, advanced object detection and identification is possible. In this work, we introduce a deep learning framework that fuses these modalities to produce a robust, real-time 3D bounding-box object detection network. We present qualitative and quantitative analyses of the proposed fusion model on the popular KITTI dataset.
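    The abstract does not specify the network architecture, so the sketch below is only a minimal late-fusion illustration in PyTorch: a small CNN branch for the camera image, a PointNet-style MLP branch for the LiDAR points, and a fused head that predicts class scores and a 7-parameter 3D bounding box. All layer sizes, the fusion-by-concatenation scheme, and the FusionDetector name are assumptions made for illustration, not the paper's model.

        # Minimal camera-LiDAR late-fusion sketch in PyTorch. Layer sizes,
        # concatenation fusion, and the single-box head are illustrative
        # assumptions; the paper's actual architecture is not given above.
        import torch
        import torch.nn as nn

        class FusionDetector(nn.Module):
            def __init__(self, num_classes=3):
                super().__init__()
                # Camera branch: dense, feature-rich 2D appearance features.
                self.image_net = nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                # LiDAR branch: PointNet-style per-point MLP over (x, y, z),
                # max-pooled into one global geometry feature.
                self.point_net = nn.Sequential(
                    nn.Linear(3, 64), nn.ReLU(),
                    nn.Linear(64, 128), nn.ReLU(),
                )
                # Fused head: class scores plus a 7-DoF 3D box
                # (x, y, z, w, h, l, yaw).
                self.fuse = nn.Sequential(nn.Linear(64 + 128, 128), nn.ReLU())
                self.cls_head = nn.Linear(128, num_classes)
                self.box_head = nn.Linear(128, 7)

            def forward(self, image, points):
                img_feat = self.image_net(image)                 # (B, 64)
                pt_feat = self.point_net(points).max(dim=1)[0]   # (B, 128)
                fused = self.fuse(torch.cat([img_feat, pt_feat], dim=1))
                return self.cls_head(fused), self.box_head(fused)

        # Example forward pass on dummy KITTI-sized inputs.
        model = FusionDetector()
        image = torch.randn(2, 3, 375, 1242)   # KITTI image resolution
        points = torch.randn(2, 1024, 3)       # 1024 LiDAR points per sample
        scores, boxes = model(image, points)
        print(scores.shape, boxes.shape)       # (2, 3) and (2, 7)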

    Multi-Modal Sensor Selection with Genetic Algorithms

    We develop a Genetic Algorithm-based method and tool for selecting sensors that provide a specified quality of data after fusion. In this paper, we concentrate on incorporating multi-modal sensor fusion into the selection operation. To evaluate data quality, we consider the combination of diverse sensors' accuracy and security metrics, and we modify a data quality evaluation calculus that incorporates these major metrics to include the possibility of multi-modal sensor fusion. To evaluate the feasibility of the Genetic Algorithm for sensor selection, we compare it against a conventional brute-force search. To implement our approach and facilitate its use in practice, we produce and release an Android application that automatically selects multi-modal sensors based on the specified sensor types and required data quality.
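    As an illustration of the selection operation, the sketch below runs a small binary-encoded Genetic Algorithm over a candidate sensor set and checks its result against brute-force enumeration, mirroring the comparison described above. The fitness function, which combines fused accuracy (assuming independent sensor errors) with worst-case security, is an assumed stand-in for the paper's data quality evaluation calculus; the sensor list and weights are invented for the example.

        # GA sensor selection vs. brute force. The quality model below is an
        # assumption, not the paper's evaluation calculus.
        import itertools
        import random

        # Candidate sensors: (name, accuracy in [0, 1], security in [0, 1]).
        SENSORS = [
            ("camera", 0.90, 0.60), ("lidar", 0.85, 0.80),
            ("radar", 0.70, 0.90), ("gps", 0.60, 0.50),
            ("imu", 0.75, 0.70), ("ultrasonic", 0.50, 0.95),
        ]

        def fitness(mask):
            # Data quality of a fused selection; mask is one 0/1 gene per sensor.
            chosen = [s for s, bit in zip(SENSORS, mask) if bit]
            if not chosen:
                return 0.0
            fused_err = 1.0
            for _, acc, _ in chosen:
                fused_err *= 1.0 - acc                   # assume independent errors
            security = min(sec for _, _, sec in chosen)  # weakest-link security
            return 0.7 * (1.0 - fused_err) + 0.3 * security

        def genetic_search(pop_size=20, generations=50, p_mut=0.1):
            n = len(SENSORS)
            pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                survivors = pop[: pop_size // 2]         # elitist selection
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, n)         # one-point crossover
                    children.append([bit ^ (random.random() < p_mut)  # bit-flip mutation
                                     for bit in a[:cut] + b[cut:]])
                pop = survivors + children
            return max(pop, key=fitness)

        def brute_force_search():
            # Exhaustive baseline over all 2^n subsets.
            return list(max(itertools.product([0, 1], repeat=len(SENSORS)), key=fitness))

        for label, best in [("GA", genetic_search()), ("brute force", brute_force_search())]:
            picked = [s[0] for s, bit in zip(SENSORS, best) if bit]
            print(f"{label}: {picked}, quality = {fitness(best):.3f}")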