95 research outputs found
Vanishing point detection for road detection
Given a single image of an arbitrary road that may not be well paved, may lack clearly delineated edges, and may have no a priori known color or texture distribution, is it possible for a computer to find this road? This paper addresses this question by decomposing the road detection process into two steps: estimating the vanishing point associated with the main (straight) part of the road, then segmenting the corresponding road area based on the detected vanishing point. The main technical contributions of the proposed approach are a novel adaptive soft-voting scheme based on variable-sized voting regions, which uses confidence-weighted Gabor filters to compute the dominant texture orientation at each pixel, and a new vanishing-point-constrained edge detection technique for detecting road boundaries. The proposed method has been implemented, and experiments on 1003 general road images demonstrate that it is both computationally efficient and effective at detecting road regions in challenging conditions.
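The core of the texture-based step above is estimating, at each pixel, which orientation a bank of Gabor filters responds to most strongly, together with a confidence weight. The sketch below is a minimal illustration of that idea only; the kernel parameters and the confidence formula (gap between the best and the mean response) are assumptions for illustration, not the paper's exact scheme.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(theta, ksize=9, sigma=2.0, lambd=4.0):
    """Real-valued Gabor kernel whose carrier varies along direction theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lambd)
    return g - g.mean()  # zero-mean so flat regions give no response

def dominant_orientation(image, n_orient=36):
    """Per-pixel dominant texture orientation plus a simple confidence score.

    Confidence here is a hypothetical proxy (best-vs-mean response gap),
    standing in for the paper's confidence weighting.
    """
    thetas = np.linspace(0.0, np.pi, n_orient, endpoint=False)
    responses = []
    for th in thetas:
        resp = np.abs(fftconvolve(image, gabor_kernel(th), mode="same"))
        responses.append(resp)
    R = np.stack(responses)                      # (n_orient, H, W)
    best = R.argmax(axis=0)
    conf = (R.max(axis=0) - R.mean(axis=0)) / (R.max(axis=0) + 1e-9)
    return thetas[best], conf
```

In the full method, each pixel would then cast a soft vote, weighted by this confidence, for vanishing-point candidates lying along its dominant orientation, with the voting region size adapted per pixel.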
MapPrior: Bird's-Eye View Map Layout Estimation with Generative Models
Despite tremendous advancements in bird's-eye view (BEV) perception, existing
models fall short in generating realistic and coherent semantic map layouts,
and they fail to account for uncertainties arising from partial sensor
information (such as occlusion or limited coverage). In this work, we introduce
MapPrior, a novel BEV perception framework that combines a traditional
discriminative BEV perception model with a learned generative model for
semantic map layouts. Our MapPrior delivers predictions with better accuracy,
realism, and uncertainty awareness. We evaluate our model on the large-scale
nuScenes benchmark. At the time of submission, MapPrior outperforms the
strongest competing method, with significantly improved MMD and ECE scores in
camera- and LiDAR-based BEV perception.
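One of the two metrics cited above, the expected calibration error (ECE), measures how well predicted confidences match observed accuracy. A minimal sketch of the standard equal-width-bin formulation (binning scheme is the usual convention, not necessarily the paper's exact evaluation code):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin predictions by confidence, then sum the
    accuracy-vs-confidence gap per bin, weighted by bin occupancy."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    n = len(confidences)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            acc = correct[mask].mean()     # empirical accuracy in this bin
            conf = confidences[mask].mean()  # mean predicted confidence
            ece += (mask.sum() / n) * abs(acc - conf)
    return ece
```

A lower ECE means the model's stated uncertainty is more trustworthy, which is the "uncertainty awareness" the abstract emphasizes.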
VAD: Vectorized Scene Representation for Efficient Autonomous Driving
Autonomous driving requires a comprehensive understanding of the surrounding
environment for reliable trajectory planning. Previous works rely on dense
rasterized scene representation (e.g., agent occupancy and semantic map) to
perform planning, which is computationally intensive and misses the
instance-level structure information. In this paper, we propose VAD, an
end-to-end vectorized paradigm for autonomous driving, which models the driving
scene as a fully vectorized representation. The proposed vectorized paradigm
has two significant advantages. On one hand, VAD exploits the vectorized agent
motion and map elements as explicit instance-level planning constraints which
effectively improves planning safety. On the other hand, VAD runs much faster
than previous end-to-end planning methods by getting rid of
computation-intensive rasterized representation and hand-designed
post-processing steps. VAD achieves state-of-the-art end-to-end planning
performance on the nuScenes dataset, outperforming the previous best method by
a large margin. Our base model, VAD-Base, greatly reduces the average collision
rate by 29.0% and runs 2.5x faster. Besides, a lightweight variant, VAD-Tiny,
greatly improves the inference speed (up to 9.3x) while achieving comparable
planning performance. We believe the excellent performance and the high
efficiency of VAD are critical for the real-world deployment of an autonomous
driving system. Code and models will be released for facilitating future
research. Comment: code & demos: https://github.com/hustvl/VA
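The instance-level constraints described above become concrete when map elements are polylines rather than raster grids: a planner can check a candidate trajectory directly against each vectorized boundary. The sketch below is an illustrative toy of that idea (function names and the margin value are hypothetical, not VAD's implementation):

```python
import numpy as np

def point_to_polyline_dist(p, polyline):
    """Minimum distance from point p to a polyline given as (N, 2) vertices."""
    d = np.inf
    for a, b in zip(polyline[:-1], polyline[1:]):
        ab = b - a
        # project p onto segment ab, clamped to the segment's extent
        t = np.clip(np.dot(p - a, ab) / (np.dot(ab, ab) + 1e-12), 0.0, 1.0)
        d = min(d, np.linalg.norm(p - (a + t * ab)))
    return d

def violates_boundary(traj, boundary, margin=0.5):
    """True if any planned waypoint comes within `margin` of a vectorized
    map boundary -- an instance-level planning constraint."""
    return any(point_to_polyline_dist(p, boundary) < margin for p in traj)
```

Because the check runs per polyline instance rather than over a dense raster, it avoids the computation-intensive rasterized representation the abstract criticizes.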
Fast and robust road sign detection in driver assistance systems
© 2018, Springer Science+Business Media, LLC, part of Springer Nature. Road sign detection plays a critical role in automatic driver-assistance systems. Road signs possess a number of unique visual qualities in images owing to their specific colors and symmetric shapes. In this paper, road signs are detected by a two-level hierarchical framework that considers both the color and the shape of the signs. To address the problem of low image contrast, we propose a new color visual-saliency segmentation algorithm, which uses the ratios of enhanced and normalized color values to capture color information. To improve computational efficiency and reduce the false-alarm rate, we modify the fast radial symmetry transform (RST) algorithm and propose an edge pairwise voting scheme that groups feature points based on their underlying symmetry in the candidate regions. Experimental results on several benchmark datasets demonstrate the superiority of our method over the state of the art in both efficiency and robustness.
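The color-ratio idea in the first stage can be illustrated with a generic normalized-ratio saliency map for red signs. This is a minimal sketch of the general technique, not the paper's exact enhancement formula; the threshold value is an assumption:

```python
import numpy as np

def red_saliency(rgb):
    """Normalized red-dominance ratio: high where red exceeds both green
    and blue, robust to overall brightness because of the normalization."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    total = r + g + b + 1e-9
    return np.maximum(0.0, np.minimum(r - g, r - b)) / total

def segment(saliency, thresh=0.1):
    """Binary candidate mask for the shape-verification stage."""
    return saliency > thresh
```

Dividing by the channel sum is what gives such ratios their resilience to low contrast and lighting changes; candidate regions from this mask would then be verified by the symmetry-based shape stage.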