772 research outputs found

    Dynamic Control Barrier Function-based Model Predictive Control to Safety-Critical Obstacle-Avoidance of Mobile Robot

    Full text link
    This paper presents an efficient and safe method to avoid static and dynamic obstacles based on LiDAR. First, the point cloud is used to generate a real-time local grid map for obstacle detection. Then, obstacles are clustered with the DBSCAN algorithm and enclosed with minimum bounding ellipses (MBEs). In addition, data association is conducted to match each MBE with the obstacle in the current frame. Taking the MBE as an observation, a Kalman filter (KF) is used to estimate and predict the motion state of the obstacle. In this way, the trajectory of each obstacle over the forward time horizon can be parameterized as a set of ellipses. Due to the uncertainty of the MBE, the semi-major and semi-minor axes of the parameterized ellipse are extended to ensure safety. We extend the traditional Control Barrier Function (CBF) and propose the Dynamic Control Barrier Function (D-CBF), which we combine with Model Predictive Control (MPC) to implement safety-critical dynamic obstacle avoidance. Experiments in simulated and real scenarios are conducted to verify the effectiveness of our algorithm. The source code is released for the reference of the community. Comment: Submitted to IEEE International Conference on Robotics and Automation (ICRA) 202
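
    As a rough illustration of the constraint described above (not the authors' exact formulation), the sketch below checks a discrete-time dynamic CBF condition along an MPC horizon, where the obstacle is a predicted, inflated ellipse at every step. The function names, the decay rate gamma, and the toy trajectories are illustrative assumptions.

    ```python
    # A minimal sketch of a discrete-time dynamic CBF check: the obstacle is a
    # predicted ellipse at each step of the horizon, and a candidate robot
    # trajectory is accepted only if the barrier decays no faster than gamma allows.
    import numpy as np

    def ellipse_barrier(p, center, a, b):
        """h(p) >= 0 when p lies outside the (inflated) axis-aligned ellipse."""
        dx, dy = p[0] - center[0], p[1] - center[1]
        return (dx / a) ** 2 + (dy / b) ** 2 - 1.0

    def satisfies_dcbf(robot_traj, ellipse_traj, gamma=0.3):
        """robot_traj: (N+1, 2) positions; ellipse_traj: (center, a, b) per step."""
        h_prev = ellipse_barrier(robot_traj[0], *ellipse_traj[0])
        for k in range(1, len(robot_traj)):
            h_k = ellipse_barrier(robot_traj[k], *ellipse_traj[k])
            # Discrete-time CBF condition: h_{k+1} - h_k >= -gamma * h_k
            if h_k - h_prev < -gamma * h_prev:
                return False
            h_prev = h_k
        return True

    # Toy usage: an obstacle drifting to the right, robot passing above it.
    obstacle = [((1.0 + 0.2 * k, 0.0), 0.6, 0.4) for k in range(11)]
    robot = np.column_stack([np.linspace(0.0, 3.0, 11), np.full(11, 1.2)])
    print(satisfies_dcbf(robot, obstacle))
    ```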

    Formation Control Using Vehicle Operational Envelopes and Behavior-Based Dual-Mode Model Predictive Control

    Get PDF
    This thesis presents a control framework for formation control. Given an initial desired trajectory, the framework generates a trajectory for each vehicle within the formation. When combined with an operational envelope, a designated area in which each vehicle may maneuver, the multi-vehicle formation control problem can be recast as a set of single-vehicle problems. A single-vehicle framework is then presented that tracks the respective trajectory when possible, or stays near it when the trajectory passes through previously unknown obstacles. Arc-based motions are used to rapidly produce desirable robot controls, while a trajectory-tracking motion ensures that the vehicle tracks the trajectory when it is obstacle-free. The resulting formation control framework is illustrated through a real-time simulation with trajectories passing through obstacles; the simulated robot is able to seamlessly balance tracking with obstacle avoidance.
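
    As a rough illustration of the dual-mode idea (not the thesis framework), the sketch below tracks a reference point when the tracking rollout is collision-free and otherwise scores constant-curvature arcs that stay inside a circular operational envelope. All names, gains, and thresholds are illustrative assumptions.

    ```python
    # A minimal sketch of dual-mode control: track the reference when it is
    # collision-free, otherwise pick the best obstacle-avoiding arc that stays
    # inside the vehicle's operational envelope.
    import numpy as np

    def rollout_arc(pose, v, w, dt=0.1, steps=20):
        """Forward-simulate a unicycle along a constant (v, w) arc."""
        x, y, th = pose
        pts = []
        for _ in range(steps):
            x += v * np.cos(th) * dt
            y += v * np.sin(th) * dt
            th += w * dt
            pts.append((x, y))
        return np.array(pts)

    def choose_control(pose, ref_point, obstacles, envelope_center, envelope_radius,
                       clearance=0.5):
        """Return (v, w): tracking control if the reference rollout is safe,
        otherwise the best obstacle-avoiding arc inside the operational envelope."""
        obstacles = np.asarray(obstacles, dtype=float)

        def min_clearance(pts):
            return min(np.linalg.norm(obstacles - p, axis=1).min() for p in pts)

        def inside_envelope(pts):
            return all(np.linalg.norm(p - np.asarray(envelope_center)) <= envelope_radius
                       for p in pts)

        # Mode 1: simple proportional tracking toward the reference point.
        heading = np.arctan2(ref_point[1] - pose[1], ref_point[0] - pose[0])
        track = (1.0, 2.0 * np.arctan2(np.sin(heading - pose[2]),
                                       np.cos(heading - pose[2])))
        if min_clearance(rollout_arc(pose, *track)) > clearance:
            return track

        # Mode 2: arc-based avoidance, scored by progress and clearance.
        best, best_score = (0.0, 0.0), -np.inf
        for v in (0.5, 1.0):
            for w in np.linspace(-1.5, 1.5, 11):
                pts = rollout_arc(pose, v, w)
                if not inside_envelope(pts) or min_clearance(pts) < clearance:
                    continue
                score = -np.linalg.norm(pts[-1] - np.asarray(ref_point)) \
                        + 0.5 * min_clearance(pts)
                if score > best_score:
                    best, best_score = (v, w), score
        return best

    # Toy usage: the reference runs straight through an obstacle at (2, 0).
    print(choose_control((0.0, 0.0, 0.0), ref_point=(5.0, 0.0),
                         obstacles=[(2.0, 0.0)], envelope_center=(2.5, 0.0),
                         envelope_radius=5.0))
    ```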

    Sensor Fusion for Object Detection and Tracking in Autonomous Vehicles

    Get PDF
    Autonomous driving vehicles depend on their perception system to understand the environment and identify all static and dynamic obstacles surrounding the vehicle. The perception system in an autonomous vehicle uses the sensory data obtained from different sensor modalities to understand the environment and perform a variety of tasks such as object detection and object tracking. Combining the outputs of different sensors to obtain a more reliable and robust outcome is called sensor fusion. This dissertation studies the problem of sensor fusion for object detection and object tracking in autonomous driving vehicles and explores different approaches for utilizing deep neural networks to accurately and efficiently fuse sensory data from different sensing modalities. In particular, this dissertation focuses on fusing radar and camera data for 2D and 3D object detection and object tracking tasks. First, the effectiveness of radar and camera fusion for 2D object detection is investigated by introducing a radar region proposal algorithm for generating object proposals in a two-stage object detection network. The evaluation results show significant improvement in speed and accuracy compared to a vision-based proposal generation method. Next, radar and camera fusion is used for the task of joint object detection and depth estimation, where the radar data is used not only in conjunction with image features to generate object proposals, but also to provide accurate depth estimation for the detected objects in the scene. A fusion algorithm is also proposed for 3D object detection, where the depth and velocity data obtained from the radar are fused with the camera images to detect objects in 3D and accurately estimate their velocities without requiring any temporal information. Finally, radar and camera sensor fusion is used for 3D multi-object tracking by introducing an end-to-end trainable online network capable of tracking objects in real time.
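
    As a rough illustration of the radar-region-proposal idea (not the dissertation's implementation), the sketch below projects radar detections into the image with a pinhole model and places distance-scaled anchor boxes around each projected point. The intrinsic matrix, box sizes, and aspect ratios are illustrative assumptions.

    ```python
    # A minimal sketch of radar-driven region proposals: project each radar
    # detection into the image plane and surround it with anchor boxes whose
    # size shrinks with distance.
    import numpy as np

    def project_radar_to_image(radar_xyz, K):
        """radar_xyz: (N, 3) points in the camera frame; K: 3x3 intrinsics."""
        uv = (K @ radar_xyz.T).T                     # (N, 3) homogeneous pixels
        return uv[:, :2] / uv[:, 2:3], radar_xyz[:, 2]   # pixel coords, depth

    def radar_proposals(radar_xyz, K, base_size=2000.0, aspect_ratios=(0.5, 1.0, 2.0)):
        """Return (M, 4) boxes [x1, y1, x2, y2]; farther detections get smaller boxes."""
        centers, depths = project_radar_to_image(radar_xyz, K)
        boxes = []
        for (u, v), z in zip(centers, depths):
            size = base_size / max(z, 1.0)
            for ar in aspect_ratios:
                w, h = size * np.sqrt(ar), size / np.sqrt(ar)
                boxes.append([u - w / 2, v - h / 2, u + w / 2, v + h / 2])
        return np.array(boxes)

    # Toy usage with an assumed intrinsic matrix and two radar detections.
    K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
    radar = np.array([[2.0, 0.5, 10.0], [-3.0, 0.2, 25.0]])   # (x, y, z) in metres
    print(radar_proposals(radar, K).round(1))
    ```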

    Vehicle Motion Forecasting using Prior Information and Semantic-assisted Occupancy Grid Maps

    Full text link
    Motion prediction is a challenging task for autonomous vehicles due to uncertainty in the sensor data, the non-deterministic nature of the future, and the complex behavior of agents. In this paper, we tackle this problem by representing the scene as dynamic occupancy grid maps (DOGMs), associating semantic labels with the occupied cells, and incorporating map information. We propose a novel framework that combines deep-learning-based spatio-temporal and probabilistic approaches to predict vehicle behaviors. Contrary to conventional OGM prediction methods, our work is evaluated against ground-truth annotations. We experiment and validate our results on the real-world nuScenes dataset and show that our model predicts both static and dynamic vehicles better than OGM predictions. Furthermore, we perform an ablation study and assess the role of semantic labels and map information in the architecture. Comment: Accepted to the 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2023)
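
    As a rough illustration of the scene representation (not the paper's pipeline), the sketch below rasterizes labeled points into a semantic occupancy grid with one channel per class and stacks a short history of grids as the kind of input a spatio-temporal predictor would consume. The class names, grid size, and cell size are illustrative assumptions.

    ```python
    # A minimal sketch of a semantic-assisted occupancy grid: labeled points are
    # rasterized into one channel per class, and a short temporal stack of grids
    # forms the network input.
    import numpy as np

    CLASSES = {"static_vehicle": 0, "dynamic_vehicle": 1, "background": 2}

    def semantic_grid(points_xy, labels, grid_size=128, cell=0.5):
        """points_xy: (N, 2) metres around the ego vehicle; labels: class names."""
        grid = np.zeros((len(CLASSES), grid_size, grid_size), dtype=np.float32)
        half = grid_size * cell / 2.0
        for (x, y), lab in zip(points_xy, labels):
            i = int((x + half) / cell)
            j = int((y + half) / cell)
            if 0 <= i < grid_size and 0 <= j < grid_size:
                grid[CLASSES[lab], i, j] = 1.0
        return grid

    # A stack of T past grids is the spatio-temporal input, shape (T, classes, H, W).
    history = [semantic_grid(np.random.uniform(-30, 30, (100, 2)),
                             np.random.choice(list(CLASSES), 100))
               for _ in range(5)]
    model_input = np.stack(history)
    print(model_input.shape)
    ```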

    SEDIMENT TRANSPORT AND THE TEMPORAL STABILITY OF THE SEAFLOOR IN THE HAMPTON-SEABROOK ESTUARY, NH: A NUMERICAL MODEL STUDY

    Get PDF
    Observations of sediment transport pathways and bathymetric change are often difficult to obtain over the spatial and temporal scales needed to maintain economic and ecological viability in dynamic coastal and estuarine environments. As a consequence, numerical models have become a useful tool to examine sediment transport and the evolution of inlets, estuaries, and harbors. In this work, sediment transport at the Hampton-Seabrook Estuary (HSE) in southern New Hampshire is simulated using the Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling framework to assess bathymetric change over a 5-year period from September 2011 to November 2016. Initial bathymetry and sediment grain size distribution are established from observations and smoothed onto a 30 m rectilinear grid that encompasses the entirety of the HSE system and extends two km offshore into the Gulf of Maine. Careful consideration is given to including hardened structures, such as jetties and sub-surface bulkheads, in the model framework. The model is forced with observations of water levels (including subtidal and tidal motions) from a local tide gauge. Field observations of sea surface height and currents are used to validate the model hydrodynamics and establish bottom boundary conditions. The verified model predicts bathymetric change in the harbor consistent with observed changes obtained from bathymetric surveys conducted at the beginning and end of the five-year study. Of particular interest are a cut through the middle ground of the flood tidal delta and the filling in of the navigational channel leading to the Seabrook side of the Harbor, both of which are qualitatively well reproduced by the model. In general, the model qualitatively predicts the gross 5-year evolution of the flood tidal delta and the channels leading to the upstream rivers, suggesting that hydrodynamically verified numerical models can be used to qualitatively predict depositional and erosional regions over inter-annual time scales at Hampton Harbor.
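
    As a toy illustration of how spatial gradients in sediment flux produce deposition and erosion (not the COAWST model itself), the sketch below steps a 1D bed-elevation profile forward with the Exner equation. The flux profile, porosity, and time step are illustrative assumptions.

    ```python
    # A minimal 1D illustration of the Exner equation,
    # (1 - porosity) * dz_b/dt = -dq_s/dx:
    # flux convergence deposits sediment and flux divergence erodes it.
    import numpy as np

    def update_bed(z_bed, q_s, dx, dt, porosity=0.4):
        """z_bed, q_s: 1D arrays of bed elevation (m) and sediment flux (m^2/s)."""
        dq_dx = np.gradient(q_s, dx)
        return z_bed - dt * dq_dx / (1.0 - porosity)

    # Toy channel on 30 m cells: flux peaks mid-domain, so the bed erodes where
    # the flux is increasing and accretes where it is decreasing.
    x = np.linspace(0.0, 3000.0, 101)
    z = np.zeros_like(x)
    q = 1e-4 * np.exp(-((x - 1500.0) / 500.0) ** 2)
    for _ in range(1000):                     # repeated hourly forcing steps
        z = update_bed(z, q, dx=30.0, dt=3600.0)
    print(z.min(), z.max())
    ```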

    SemanticBEVFusion: Rethink LiDAR-Camera Fusion in Unified Bird's-Eye View Representation for 3D Object Detection

    Full text link
    LiDAR and camera are two essential sensors for 3D object detection in autonomous driving. LiDAR provides accurate and reliable 3D geometry information, while the camera provides rich texture and color. Despite the increasing popularity of fusing these two complementary sensors, the challenge remains in how to effectively fuse 3D LiDAR point clouds with 2D camera images. Recent methods focus on point-level fusion, which paints the LiDAR point cloud with camera features in the perspective view, or bird's-eye view (BEV)-level fusion, which unifies multi-modality features in the BEV representation. In this paper, we rethink these previous fusion strategies and analyze their information loss and influences on geometric and semantic features. We present SemanticBEVFusion to deeply fuse camera features with LiDAR features in a unified BEV representation while maintaining per-modality strengths for 3D object detection. Our method achieves state-of-the-art performance on the large-scale nuScenes dataset, especially for challenging distant objects. The code will be made publicly available. Comment: The first two authors contributed equally to this work
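
    As a rough illustration of BEV-level fusion (not the SemanticBEVFusion architecture), the sketch below concatenates LiDAR and camera feature maps that are assumed to already live on the same bird's-eye-view grid and mixes them with a 1x1 convolution written as an einsum. The channel counts and grid size are illustrative assumptions.

    ```python
    # A minimal sketch of BEV-level fusion: concatenate per-modality BEV feature
    # maps along the channel axis and mix them with a learned 1x1 convolution.
    import numpy as np

    def fuse_bev(lidar_bev, camera_bev, weight):
        """lidar_bev: (C_l, H, W), camera_bev: (C_c, H, W),
        weight: (C_out, C_l + C_c) acting as a 1x1 convolution."""
        stacked = np.concatenate([lidar_bev, camera_bev], axis=0)   # (C_l+C_c, H, W)
        return np.einsum("oc,chw->ohw", weight, stacked)            # (C_out, H, W)

    # Toy usage on a 200x200 BEV grid with assumed channel counts.
    lidar = np.random.randn(64, 200, 200).astype(np.float32)
    camera = np.random.randn(80, 200, 200).astype(np.float32)
    w = (np.random.randn(128, 144) * 0.01).astype(np.float32)
    print(fuse_bev(lidar, camera, w).shape)    # (128, 200, 200)
    ```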

    ObVi-SLAM: Long-Term Object-Visual SLAM

    Full text link
    Robots responsible for tasks over long time scales must be able to localize consistently and scalably amid geometric, viewpoint, and appearance changes. Existing visual SLAM approaches rely on low-level feature descriptors that are not robust to such environmental changes and result in large map sizes that scale poorly over long-term deployments. In contrast, object detections are robust to environmental variations and lead to more compact representations, but most object-based SLAM systems target short-term indoor deployments with close objects. In this paper, we introduce ObVi-SLAM to overcome these challenges by leveraging the best of both approaches. ObVi-SLAM uses low-level visual features for high-quality short-term visual odometry; and to ensure global, long-term consistency, ObVi-SLAM builds an uncertainty-aware long-term map of persistent objects and updates it after every deployment. By evaluating ObVi-SLAM on data from 16 deployment sessions spanning different weather and lighting conditions, we empirically show that ObVi-SLAM generates accurate localization estimates that remain consistent over long time scales in spite of varying appearance conditions. Comment: 8 pages, 7 figures, 1 table plus appendix with 4 figures and 1 table
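
    As a rough illustration of an uncertainty-aware object map (hypothetical, not the ObVi-SLAM estimator), the sketch below keeps a position mean and covariance per persistent object, fuses a new detection into the nearest object with a Kalman-style update when it falls inside a Mahalanobis gate, and otherwise adds a new object. The class and parameter names are illustrative assumptions.

    ```python
    # A minimal sketch of an uncertainty-aware object map: each persistent object
    # keeps a position mean and covariance; detections are either fused into an
    # existing object or added as new ones.
    import numpy as np

    class ObjectMap:
        def __init__(self, gate=3.0):
            self.means, self.covs = [], []
            self.gate = gate          # Mahalanobis association threshold

        def update(self, z, R):
            """z: detected object position (3,); R: its 3x3 measurement covariance."""
            for i, (m, P) in enumerate(zip(self.means, self.covs)):
                S = P + R
                d2 = (z - m) @ np.linalg.solve(S, z - m)     # squared Mahalanobis distance
                if d2 < self.gate ** 2:
                    K = P @ np.linalg.inv(S)                 # Kalman gain
                    self.means[i] = m + K @ (z - m)
                    self.covs[i] = (np.eye(3) - K) @ P
                    return
            self.means.append(z.copy())                      # unmatched: new object
            self.covs.append(R.copy())

    # Toy usage: two noisy observations of the same object across deployments.
    omap = ObjectMap()
    omap.update(np.array([5.0, 1.0, 0.0]), np.eye(3) * 0.5)
    omap.update(np.array([5.2, 0.9, 0.1]), np.eye(3) * 0.5)
    print(len(omap.means), omap.means[0])
    ```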