6 research outputs found

    A depth-based hybrid approach for safe flight corridor generation in memoryless planning

    This paper presents a depth-based hybrid method to generate safe flight corridors for a memoryless local navigation planner. We first propose using raw depth images directly as inputs to a learning-based object-detection engine, with no requirement for map fusion. An object-detection network then predicts the bases of polyhedral safe corridors directly in each new raw depth image. Furthermore, we apply a verification procedure that eliminates false predictions, so that the resulting corridors are guaranteed to be collision-free. More importantly, the proposed mechanism produces separate safe corridors with minimal overlap, making them suitable as space boundaries for path planning: the average intersection over union (IoU) of the corridors obtained by the proposed algorithm is less than 2%. To evaluate the effectiveness of our method, we incorporated it into a memoryless planner with a straight-line path-planning algorithm and tested the entire system in both synthetic and real-world obstacle-dense environments. The very high success rates obtained demonstrate that the proposed approach is highly capable of producing safe corridors for memoryless local planning. © 2023 by the authors.
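    The abstract describes the pipeline only at a high level. The sketch below illustrates the verification and overlap-filtering ideas it mentions, not the paper's implementation: it assumes the predicted corridor bases are axis-aligned boxes in depth-image coordinates, accepts a box only when every valid depth reading inside it exceeds the corridor's assigned depth plus a safety margin, and discards predictions whose IoU with an already accepted corridor exceeds a small threshold. The box format, margin, and thresholds are illustrative assumptions.

```python
import numpy as np

def verify_corridor(depth, box, corridor_depth, margin=0.2):
    """Accept a predicted corridor base only if every valid depth reading
    inside the box is farther than the corridor depth plus a safety margin."""
    x0, y0, x1, y1 = box
    patch = depth[y0:y1, x0:x1]
    valid = patch[patch > 0.0]                   # ignore missing (zero) depth pixels
    return valid.size > 0 and bool(np.all(valid > corridor_depth + margin))

def box_iou(a, b):
    """Intersection over union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def filter_corridors(depth, predictions, iou_max=0.02):
    """Keep only verified corridor bases whose pairwise overlap stays below iou_max."""
    kept = []
    for box, corridor_depth in predictions:
        if not verify_corridor(depth, box, corridor_depth):
            continue                             # false prediction: obstacle inside the box
        if all(box_iou(box, b) <= iou_max for b, _ in kept):
            kept.append((box, corridor_depth))
    return kept

# Example with a synthetic 480x640 depth image (metres) and two predicted boxes.
depth = np.full((480, 640), 8.0)
depth[200:260, 500:560] = 1.5                    # a nearby obstacle on the right
predictions = [((100, 150, 300, 350), 4.0),      # left box: free up to 4 m -> kept
               ((450, 150, 620, 350), 4.0)]      # right box: blocked -> rejected
print(filter_corridors(depth, predictions))
```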

    Learning Steering Bounds for Parallel Autonomous Systems

    Deep learning has been successfully applied to “end-to-end” learning of the autonomous driving task, where a deep neural network learns to predict steering control commands from camera input. While these previous works support purely reactive control, the learned representation is not usable for the higher-level decision making required for autonomous navigation. This paper tackles the problem of learning a representation that predicts a continuous probability distribution over control, and thus steering control options and bounds on those options, which can be used for autonomous navigation. Each mode of the distribution encodes a possible macro-action that the system could execute at that instant, and the covariances of the modes place bounds on safe steering control values. Our approach has the added advantage of being trained on unlabeled data collected from inexpensive cameras. The deep-neural-network-based algorithm generates a probability distribution over the space of steering angles, from which we use variational Bayesian methods to extract a mixture model and compute the different possible actions in the environment. A bound, which the autonomous vehicle must respect in our parallel autonomy setting, is then computed for each of these actions. We evaluate our approach on a challenging dataset containing a wide variety of driving conditions, and show that our algorithm is capable of parameterizing Gaussian mixture models over the possible actions and extracting steering bounds with a mean error of only 2 degrees. Additionally, we demonstrate our system on a full-scale autonomous vehicle and evaluate its ability to successfully handle a variety of parallel autonomy situations.
    Toyota Research Institute
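    The abstract does not include implementation details. The sketch below illustrates one plausible version of the mixture-extraction step, assuming the network's output distribution is represented by steering-angle samples: a variational Bayesian Gaussian mixture (scikit-learn's BayesianGaussianMixture) is fitted to the samples, low-weight components are dropped, and each remaining mode yields a steering bound of mean ± 2 standard deviations. The weight threshold and bound width are assumptions, not the paper's values.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

def steering_modes_and_bounds(angle_samples, max_modes=5,
                              weight_min=0.05, n_sigma=2.0):
    """Fit a variational Bayesian Gaussian mixture to sampled steering
    angles (degrees) and return (mean, lower, upper) for each retained mode."""
    gmm = BayesianGaussianMixture(
        n_components=max_modes,
        weight_concentration_prior_type="dirichlet_process",
        random_state=0,
    )
    gmm.fit(np.asarray(angle_samples).reshape(-1, 1))

    bounds = []
    for weight, mean, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        if weight < weight_min:                  # mode switched off by the variational fit
            continue
        mu = float(mean[0])
        sigma = float(np.sqrt(cov[0, 0]))
        bounds.append((mu, mu - n_sigma * sigma, mu + n_sigma * sigma))
    return bounds

# Example: samples suggesting two macro-actions (slight left, sharp right turn).
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(-3.0, 1.5, 500), rng.normal(25.0, 2.0, 300)])
for mu, lo, hi in steering_modes_and_bounds(samples):
    print(f"mode at {mu:+.1f} deg, steering bound [{lo:+.1f}, {hi:+.1f}] deg")
```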