Image Parsing with a Wide Range of Classes and Scene-Level Context
This paper presents a nonparametric scene parsing approach that improves the
overall accuracy, as well as the coverage of foreground classes in scene
images. We first improve the label likelihood estimates at superpixels by
merging likelihood scores from different probabilistic classifiers. This boosts
the classification performance and enriches the representation of
less-represented classes. Our second contribution consists of incorporating
semantic context in the parsing process through global label costs. Our method
does not rely on image retrieval sets but rather assigns a global likelihood
estimate to each label, which is plugged into the overall energy function. We
evaluate our system on two large-scale datasets, SIFTflow and LMSun. We achieve
state-of-the-art performance on the SIFTflow dataset and near-record results on
LMSun.
Comment: Published at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015
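The first contribution above, merging likelihood scores from different probabilistic classifiers at each superpixel, can be sketched as a simple log-linear fusion. The function name, the uniform weighting, and the geometric-mean pooling rule below are assumptions for illustration, not the paper's exact formulation:

```python
import math

def merge_likelihoods(score_maps, weights=None):
    """Fuse per-label probability estimates from several classifiers
    for one superpixel via weighted log-linear (geometric-mean) pooling.
    `score_maps`: list of dicts mapping label -> probability.
    Hypothetical fusion rule; the paper's combination may differ."""
    labels = set().union(*score_maps)
    if weights is None:
        weights = [1.0 / len(score_maps)] * len(score_maps)
    fused = {}
    for lbl in labels:
        # geometric mean of classifier probabilities (small floor
        # avoids log(0) for labels a classifier never predicts)
        fused[lbl] = math.exp(sum(w * math.log(m.get(lbl, 1e-9))
                                  for w, m in zip(weights, score_maps)))
    z = sum(fused.values())  # renormalize to a proper distribution
    return {lbl: v / z for lbl, v in fused.items()}

# e.g. combining an SVM-style and a kNN-style estimate for one superpixel
fused = merge_likelihoods([{"sky": 0.7, "tree": 0.2, "car": 0.1},
                           {"sky": 0.5, "tree": 0.4, "car": 0.1}])
```

Log-linear pooling tends to favor labels on which the classifiers agree, which matches the stated goal of boosting classification performance for less-represented classes.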
A Novel Translation, Rotation, and Scale-Invariant Shape Description Method for Real-Time Speed-Limit Sign Recognition
Speed-limit sign (SLS) recognition is an important function for realizing advanced driver assistance systems (ADAS). This paper presents a novel design of an image-based SLS recognition algorithm, which can efficiently detect and recognize SLSs in real time. To improve the robustness of the proposed SLS algorithm, this paper also proposes a new shape description method that describes the detected SLS using centroid-to-contour (CtC) distances of the sign content. The proposed CtC descriptor is invariant to translation, rotation, and scale variations of the SLS in the image. This advantage increases the recognition rate of a linear support vector machine classifier. The proposed SLS recognition method has been implemented and tested on an ARM-based embedded platform. Experimental results validate the recognition accuracy and real-time performance of the proposed method.
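A centroid-to-contour descriptor of the kind the abstract describes can be sketched as follows. The normalization choices here, dividing by the maximum distance for scale invariance and rotating the sequence to start at the farthest contour point for rotation invariance, are common conventions assumed for illustration; the paper's exact CtC construction may differ:

```python
import math

def ctc_descriptor(contour, n_samples=32):
    """Centroid-to-contour (CtC) distance descriptor sketch.
    `contour`: ordered list of (x, y) boundary points of the sign content."""
    # translation invariance: measure distances from the centroid
    cx = sum(p[0] for p in contour) / len(contour)
    cy = sum(p[1] for p in contour) / len(contour)
    d = [math.hypot(x - cx, y - cy) for x, y in contour]
    # scale invariance: normalize by the maximum distance
    m = max(d)
    d = [v / m for v in d]
    # rotation invariance (assumed convention): circularly shift the
    # sequence so it starts at the farthest contour point
    k = d.index(1.0)
    d = d[k:] + d[:k]
    # resample to a fixed length so descriptors are comparable
    step = len(d) / n_samples
    return [d[int(i * step)] for i in range(n_samples)]
```

The fixed-length output is what makes the descriptor usable as a feature vector for a linear SVM classifier, as in the abstract.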
Adaptive Nonparametric Image Parsing
In this paper, we present an adaptive nonparametric solution to the image
parsing task, namely annotating each image pixel with its corresponding
category label. For a given test image, first, a locality-aware retrieval set
is extracted from the training data based on super-pixel matching similarities,
which are augmented with feature extraction for better differentiation of local
super-pixels. Then, the category of each super-pixel is initialized by the
majority vote of the k-nearest-neighbor super-pixels in the retrieval set.
Instead of fixing k as in traditional nonparametric approaches, here we
propose a novel adaptive nonparametric approach which determines a
sample-specific k for each test image. In particular, k is adaptively set to
the smallest number of nearest super-pixels with which the images in the
retrieval set obtain the best category prediction. Finally, the initial
super-pixel labels are further refined by contextual smoothing. Extensive
experiments on challenging datasets demonstrate the superiority of the new
solution over other state-of-the-art nonparametric solutions.
Comment: 11 pages
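The adaptive-k rule described above, choosing the smallest k with which the retrieval set yields the best category prediction, can be sketched like this. The data format (pairs of a true label and a similarity-sorted neighbor-label list) is an assumption for illustration:

```python
from collections import Counter

def majority_vote(labels):
    """Most frequent label; Counter breaks ties by insertion order."""
    return Counter(labels).most_common(1)[0][0]

def adaptive_k(retrieval_pairs, k_max=None):
    """Pick the smallest k whose majority vote is most often correct on
    the retrieval set. `retrieval_pairs`: list of
    (true_label, [neighbor labels sorted by similarity]) — a
    hypothetical format standing in for the paper's super-pixel data."""
    if k_max is None:
        k_max = max(len(nbrs) for _, nbrs in retrieval_pairs)
    best_k, best_acc = 1, -1.0
    for k in range(1, k_max + 1):
        acc = sum(majority_vote(nbrs[:k]) == truth
                  for truth, nbrs in retrieval_pairs) / len(retrieval_pairs)
        if acc > best_acc:  # strict '>' keeps the smallest k on ties
            best_k, best_acc = k, acc
    return best_k
```

Scanning k from 1 upward with a strict improvement test implements "the fewest nearest super-pixels" directly: among equally accurate choices, the smallest k wins.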
Detection and Recognition of Traffic Signs Inside the Attentional Visual Field of Drivers
Traffic sign detection and recognition systems are essential components of Advanced Driver Assistance Systems and self-driving vehicles. In this contribution we present a vision-based framework which detects and recognizes traffic signs inside the attentional visual field of drivers. This technique takes advantage of the driver's 3D absolute gaze point obtained through the combined use of a front-view stereo imaging system and a non-contact 3D gaze tracker. We used a linear Support Vector Machine as a classifier and a Histogram of Oriented Gradients as features for detection. Recognition is performed by using Scale Invariant Feature Transforms and color information. Our technique detects and recognizes signs which are in the field of view of the driver and also provides indication when one or more signs have been missed by the driver.
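One minimal way to model "inside the attentional visual field" is a cone test around the driver's 3D gaze direction; the abstract does not specify the field model, so the function, the cone shape, and the 10° half-angle below are all assumptions for illustration:

```python
import math

def inside_attentional_field(gaze_dir, sign_dir, half_angle_deg=10.0):
    """Return True if the direction to a detected sign lies within a
    cone around the driver's gaze direction (hypothetical geometric
    test; the paper's attentional-field model may differ).
    Both arguments are (x, y, z) vectors from the driver's eye."""
    dot = sum(g * s for g, s in zip(gaze_dir, sign_dir))
    ng = math.sqrt(sum(g * g for g in gaze_dir))
    ns = math.sqrt(sum(s * s for s in sign_dir))
    # clamp guards against rounding just outside [-1, 1]
    cos_ang = max(-1.0, min(1.0, dot / (ng * ns)))
    return math.degrees(math.acos(cos_ang)) <= half_angle_deg
```

A detected sign failing this test for the whole time it is visible would then trigger the "missed sign" indication the abstract mentions.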
Semantic Concept Co-Occurrence Patterns for Image Annotation and Retrieval.
Describing visual image contents by semantic concepts is an effective and straightforward way to facilitate various high-level applications. Inferring semantic concepts from low-level pictorial feature analysis is challenging due to the semantic gap problem, while manually labeling concepts is unwise because of the large number of images in both online and offline collections. In this paper, we present a novel approach to automatically generate intermediate image descriptors by exploiting concept co-occurrence patterns in the pre-labeled training set, which makes it possible to depict complex scene images semantically. Our work is motivated by the fact that multiple concepts that frequently co-occur across images form patterns which could provide contextual cues for individual concept inference. We discover the co-occurrence patterns as hierarchical communities by graph modularity maximization in a network with nodes and edges representing concepts and co-occurrence relationships, respectively. A random walk process working on the inferred concept probabilities with the discovered co-occurrence patterns is applied to acquire the refined concept signature representation. Through experiments in automatic image annotation and semantic image retrieval on several challenging datasets, we demonstrate the effectiveness of the proposed concept co-occurrence patterns as well as the concept signature representation in comparison with state-of-the-art approaches.
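The random-walk refinement step can be sketched as a random walk with restart over a concept co-occurrence graph. The update rule and restart weight below follow a common formulation assumed for illustration; the paper's exact walk may differ:

```python
def refine_signature(p0, W, alpha=0.85, n_iter=50):
    """Refine initial concept probabilities p0 by a random walk with
    restart over co-occurrence matrix W (row-stochastic: W[j][i] is the
    transition weight from concept j to concept i).
    Iterates p <- (1 - alpha) * p0 + alpha * W^T p (hypothetical
    formulation; the paper's exact process may differ)."""
    n = len(p0)
    p = list(p0)
    for _ in range(n_iter):
        p = [(1 - alpha) * p0[i] +
             alpha * sum(W[j][i] * p[j] for j in range(n))
             for i in range(n)]
    return p
```

Because W is row-stochastic, each update preserves total probability mass, so the refined vector remains a valid concept signature while mass diffuses toward frequently co-occurring concepts.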