53 research outputs found

    LiDAR point-cloud processing based on projection methods: a comparison

    Full text link
    An accurate and rapid-response perception system is fundamental for autonomous vehicles to operate safely. 3D object detection methods handle point clouds given by LiDAR sensors to provide accurate depth and position information for each detection, together with its dimensions and classification. This information is then used to track vehicles and other obstacles in the surroundings of the autonomous vehicle, and also to feed the control units that guarantee collision avoidance and motion planning. Nowadays, object detection systems can be divided into two main categories. The first is geometric-based methods, which retrieve obstacles using geometric and morphological operations on the 3D points. The second is deep learning-based methods, which process the 3D points, or an elaboration of the 3D point cloud, with deep learning techniques to retrieve a set of obstacles. This paper presents a comparison between these two approaches, presenting one implementation of each class on a real autonomous vehicle. The accuracy of the algorithms' estimates has been evaluated with experimental tests carried out at the Monza ENI circuit. The positions of the ego vehicle and the obstacle are given by GPS sensors with RTK correction, which guarantees an accurate ground truth for the comparison. Both algorithms have been implemented on ROS and run on a consumer laptop.
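    As a rough illustration of the geometric class of detectors (not the paper's implementation), the sketch below removes ground points with a simple height threshold and groups the remaining returns into obstacles with DBSCAN; the `detect_obstacles` helper and all thresholds are assumptions for illustration.

```python
# Minimal sketch of a geometric LiDAR obstacle detector (illustrative only):
# crude ground removal by height threshold, then Euclidean clustering.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_obstacles(points, ground_z=-1.5, eps=0.5, min_points=10):
    """points: (N, 3) array of LiDAR returns in the vehicle frame."""
    above_ground = points[points[:, 2] > ground_z]   # drop ground returns
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(above_ground)
    boxes = []
    for label in set(labels) - {-1}:                 # -1 marks noise points
        cluster = above_ground[labels == label]
        boxes.append((cluster.min(axis=0), cluster.max(axis=0)))  # AABB corners
    return boxes
```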

    Advances in centerline estimation for autonomous lateral control

    Full text link
    The ability of autonomous vehicles to maintain an accurate trajectory within their road lane is crucial for safe operation. This requires detecting the road lines and estimating the car's relative pose within its lane. Lateral lines are usually retrieved from camera images. Still, most works on line detection are limited to image-mask retrieval and do not provide a usable representation in world coordinates. What we propose in this paper is a complete perception pipeline based on monocular vision, able to retrieve all the information required by a vehicle lateral control system: road line equations, centerline, vehicle heading and lateral displacement. We evaluate our system by acquiring data with accurate geometric ground truth. To act as a benchmark for further research, we make this new dataset publicly available at http://airlab.deib.polimi.it/datasets/. Comment: Presented at 2020 IEEE Intelligent Vehicles Symposium (IV), 8 pages, 8 figures.
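    To make the pipeline's output concrete, here is a minimal sketch (assumed names and a simplified straight-line fit, not the authors' code) of how lane-line pixels can be projected onto the road plane and reduced to the quantities a lateral controller needs:

```python
# Illustrative sketch: project lane-line pixels to ground coordinates via a
# calibrated image-to-ground homography, fit each line, and derive the
# centerline, relative heading and lateral displacement.
import numpy as np

def lane_state(left_px, right_px, H):
    """left_px, right_px: (N, 2) pixel coordinates of the two lane lines.
    H: 3x3 image-to-ground homography from camera calibration (assumed given)."""
    def to_ground(px):
        pts = np.column_stack([px, np.ones(len(px))]) @ H.T
        return pts[:, :2] / pts[:, 2:3]               # (x forward, y left), metres
    left, right = to_ground(left_px), to_ground(right_px)
    al, bl = np.polyfit(left[:, 0], left[:, 1], 1)    # y = a*x + b per line
    ar, br = np.polyfit(right[:, 0], right[:, 1], 1)
    a, b = (al + ar) / 2, (bl + br) / 2               # centerline coefficients
    heading = np.arctan(a)                            # relative heading (rad)
    lateral_displacement = b                          # offset at x = 0 (m)
    return (a, b), heading, lateral_displacement
```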

    Retrospective evaluation of whole exome and genome mutation calls in 746 cancer samples

    No full text
    Funder: NCI U24CA211006. Abstract: The Cancer Genome Atlas (TCGA) and the International Cancer Genome Consortium (ICGC) curated consensus somatic mutation calls using whole exome sequencing (WES) and whole genome sequencing (WGS), respectively. Here, as part of the ICGC/TCGA Pan-Cancer Analysis of Whole Genomes (PCAWG) Consortium, which aggregated whole genome sequencing data from 2,658 cancers across 38 tumour types, we compare WES and WGS side-by-side from 746 TCGA samples, finding that ~80% of mutations overlap in covered exonic regions. We estimate that low variant allele fraction (VAF < 15%) and clonal heterogeneity contribute up to 68% of private WGS mutations and 71% of private WES mutations. We observe that ~30% of private WGS mutations trace to mutations identified by a single variant caller in WES consensus efforts. WGS captures both ~50% more variation in exonic regions and unobserved mutations in loci with variable GC-content. Together, our analysis highlights technological divergences between two reproducible somatic variant detection efforts.
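    The comparison boils down to set operations over the two call sets; a toy sketch of that bookkeeping (illustrative field names and a stand-in `concordance` helper, not the consortium's pipeline) might look like this:

```python
# Toy concordance computation between WES and WGS somatic calls.
def concordance(wes, wgs, vaf_cutoff=0.15):
    """wes, wgs: dicts mapping (chrom, pos, ref, alt) -> variant allele fraction."""
    shared = wes.keys() & wgs.keys()
    overlap_frac = len(shared) / len(wes.keys() | wgs.keys())
    private_wgs = wgs.keys() - wes.keys()              # calls seen only in WGS
    low_vaf = sum(wgs[v] < vaf_cutoff for v in private_wgs)
    low_vaf_frac = low_vaf / len(private_wgs) if private_wgs else 0.0
    return overlap_frac, low_vaf_frac
```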

    Autonomous steer actuation for an urban quadricycle

    No full text

    Teleoperated Vehicle-Perspective Predictive Display Accounting for Network Time Delays

    No full text
    Variable network time delays in data transmission are the major problem in teleoperating a vehicle. Even on an LTE network, the variability of these delays is high (70-150 ms ping). This paper presents an innovative approach that provides the remote operator with a forecasted video stream replicating the future perspective of the vehicle's FOV upon reception of maneuvering commands. First, the vehicle position is predicted, accounting for its speed and the data-transmission delays. Then a perspective image transformation is performed to obtain the exact new perspective of the vehicle's FOV corresponding to the predicted position. This approach addresses both issues: the time delays as well as their variability. Only one display, showing the frontward FOV, is available in the mock-up.
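    A much-simplified sketch of the prediction-and-warp idea follows; the zoom-about-centre approximation and all parameter names are assumptions for illustration, not the authors' transformation:

```python
# Predict how far the vehicle moves during the round-trip delay, then warp
# the latest frame to approximate the camera view from the predicted pose.
import cv2

def predicted_view(frame, speed_mps, delay_s, zoom_per_metre=0.02):
    d = speed_mps * delay_s                            # predicted travel (m)
    s = 1.0 + zoom_per_metre * d                       # crude forward-motion zoom
    h, w = frame.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), 0, s)  # scale about image centre
    return cv2.warpAffine(frame, M, (w, h))
```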

    NMPC trajectory planner for urban autonomous driving

    No full text
    This paper presents a trajectory planner for autonomous driving based on a Nonlinear Model Predictive Control (NMPC) algorithm that accounts for Pacejka's nonlinear lateral tyre dynamics, as well as for zero-speed conditions, through a novel slip angle calculation. In the NMPC framework, road boundaries and obstacles (both static and moving) are taken into account through soft and hard constraints. The numerical solution of the NMPC problem is carried out using the ACADO toolkit coupled with the quadratic programming solver qpOASES. The effectiveness of the proposed NMPC trajectory planner has been tested using CarMaker multibody models. The formulation of the vehicle, road and obstacle models has been specifically tailored to obtain a continuous and differentiable optimisation problem. This makes it possible to achieve a computationally efficient implementation by exploiting automatic differentiation. Moreover, robustness is improved by means of a parallelised implementation of multiple instances of the planning algorithm with different spatial horizon lengths. Time analysis and performance results obtained in closed-loop simulations show that the proposed algorithm can be implemented within a real-time control framework of an autonomous vehicle.
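    The paper's novel zero-speed formulation is not reproduced here, but a common regularisation that keeps the lateral slip angle well defined at standstill (a hedged sketch with assumed vehicle parameters) is:

```python
# Regularised front slip angle: the denominator never vanishes, so the
# expression stays defined as the longitudinal speed vx approaches zero.
import numpy as np

def front_slip_angle(vx, vy, yaw_rate, steer, lf=1.2, v_eps=0.5):
    """vx, vy: body-frame velocities (m/s); lf: CoG-to-front-axle distance (m);
    v_eps: small blending speed that smooths the singularity at vx = 0."""
    denom = np.sqrt(vx**2 + v_eps**2)
    return steer - np.arctan2(vy + lf * yaw_rate, denom)
```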

    End-to-End Learning of Autonomous Vehicle Lateral Control via MPC Training

    No full text
    One of the main requirements of an autonomous vehicle is the ability to maintain its trajectory within the road lane. This task is generally performed using vision data, processed with convolutional neural networks or classical computer vision algorithms to extract a road mask. A software pipeline then analyzes this mask to retrieve the vehicle's relative state. This process is composed of many components that need to be tuned to achieve good results. What is proposed in this paper is instead an end-to-end solution able to infer the steering command directly from camera images. Unlike classical end-to-end machine-learning approaches, the architecture is not trained using car data from a human driver as ground truth, but rather the output of a control algorithm. The network therefore does not mimic a specific human behavior but learns, in an end-to-end fashion, how to achieve the optimal trajectory computed by the algorithm.
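    A minimal PyTorch sketch of this training scheme is shown below (hypothetical model and data; the paper's architecture differs): the regression target is the steering command produced by a model predictive controller rather than a human driver's input.

```python
# Train a toy CNN to imitate an MPC's steering output from camera images.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                         # single output: steering angle
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

# Stand-in batch; in practice a loader would yield (image, MPC steering label).
loader = [(torch.randn(8, 3, 64, 64), torch.randn(8))]

for images, mpc_steering in loader:
    pred = model(images).squeeze(1)
    loss = loss_fn(pred, mpc_steering)        # imitate the controller, not a human
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```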
    • …