12 research outputs found
Fast traffic sign recognition using color segmentation and deep convolutional networks
The use of Computer Vision techniques for the automatic recognition of road signs is fundamental for the development of intelligent vehicles and advanced driver assistance systems. In this paper, we describe a procedure based on color segmentation, Histogram of Oriented Gradients (HOG), and Convolutional Neural Networks (CNN) for detecting and classifying road signs. Detection is sped up by a pre-processing step that reduces the search space, while classification is carried out using a Deep Learning technique. A quantitative evaluation of the proposed approach has been conducted on the well-known German Traffic Sign data set and on the novel Data set of Italian Traffic Signs (DITS), which is publicly available and contains challenging sequences captured in adverse weather conditions and in an urban scenario at night-time. Experimental results demonstrate the effectiveness of the proposed approach in terms of both classification accuracy and computational speed.
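The detection pre-filter described above, which discards image regions whose colors cannot belong to a sign, can be sketched in a few lines. This is a minimal illustration only; the red-dominance rule and the threshold below are our own assumptions, not details taken from the paper:

```python
def red_mask(image, ratio=1.5):
    """Return a binary mask marking red-dominant pixels.

    image: list of rows, each row a list of (r, g, b) tuples.
    ratio: hypothetical threshold for how strongly red must
           exceed the other two channels to keep the pixel.
    """
    mask = []
    for row in image:
        mask_row = []
        for r, g, b in row:
            # keep the pixel as a sign candidate only if red dominates
            dominant = r > ratio * max(g, b, 1)
            mask_row.append(1 if dominant else 0)
        mask.append(mask_row)
    return mask

# Tiny synthetic 2x2 image: two red-dominant pixels, two others.
img = [[(200, 30, 40), (90, 90, 90)],
       [(10, 200, 10), (180, 60, 50)]]
print(red_mask(img))  # [[1, 0], [0, 1]]
```

Only the surviving pixels need to be scanned by the HOG/CNN stages, which is where the speed-up comes from.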
Total Recall: Understanding Traffic Signs using Deep Hierarchical Convolutional Neural Networks
Recognizing traffic signs using intelligent systems can drastically reduce
the number of accidents happening worldwide. With the arrival of self-driving
cars, automatically recognizing traffic and hand-held signs in major streets
has become a staple challenge. Various machine learning techniques, such as
Random Forest and SVM, as well as deep learning models, have been proposed for
classifying traffic signs. Though they reach state-of-the-art performance on a
particular data set, they fall short of tackling multiple Traffic Sign
Recognition benchmarks. In this paper, we propose a novel, one-for-all
architecture that aces multiple benchmarks with a better overall score than
the state-of-the-art architectures. Our model is made of residual
convolutional blocks with hierarchical dilated skip connections joined in
steps. With this, we score 99.33% accuracy on the German sign recognition
benchmark and 99.17% accuracy on the Belgian traffic sign classification
benchmark. Moreover, we propose a newly devised dilated residual learning
representation technique which is very low in both memory and computational
complexity.
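To make the building block named above concrete, here is a minimal pure-Python sketch of a 1-D residual block built on a dilated convolution. The kernel, dilation factor, and the cropping of the residual add are our own simplifications, not the paper's architecture:

```python
def dilated_conv1d(x, kernel, dilation):
    """Valid 1-D convolution whose taps are `dilation` samples apart."""
    span = (len(kernel) - 1) * dilation
    return [sum(kernel[j] * x[i + j * dilation] for j in range(len(kernel)))
            for i in range(len(x) - span)]

def residual_dilated_block(x, kernel, dilation):
    """Dilated convolution plus a skip connection back to the input."""
    y = dilated_conv1d(x, kernel, dilation)
    # residual add on the overlapping prefix (zip truncates to the shorter list)
    return [a + b for a, b in zip(x, y)]

print(residual_dilated_block([1, 2, 3, 4, 5], [1, 1], 2))  # [5, 8, 11]
```

Dilation widens the receptive field without extra parameters, and the skip connection lets the block learn a residual rather than the full mapping; stacking such blocks with growing dilations gives the hierarchical connectivity the abstract refers to.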
On The Effect of Hyperedge Weights On Hypergraph Learning
Hypergraphs are a powerful representation in several computer vision, machine
learning, and pattern recognition problems. In the last decade, many
researchers have been keen to develop different hypergraph models. In
contrast, not much attention has been paid to the design of hyperedge weights.
However, many studies on pairwise graphs show that the choice of edge weight
can significantly influence the performance of such graph algorithms. We argue
that this also applies to hypergraphs. In this paper, we empirically discuss
the influence of hyperedge weights on hypergraph learning by proposing three
novel hyperedge weights from the perspectives of geometry, multivariate
statistical analysis, and linear regression. Extensive experiments on the ORL,
COIL20, JAFFE, Sheffield, Scene15, and Caltech256 databases verify our
hypothesis. Similar to graph learning, several representative hyperedge
weighting schemes emerge from our experimental studies. Moreover, the
experiments also demonstrate that combinations of such weighting schemes and
conventional hypergraph models achieve very promising classification and
clustering performance in comparison with some recent state-of-the-art
algorithms.
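As a concrete example of a geometrically motivated hyperedge weight, one can down-weight hyperedges whose member vertices are spread far apart. This is our own toy construction to illustrate the idea, not one of the three weights proposed in the paper:

```python
import math

def hyperedge_weight(points):
    """Toy geometric hyperedge weight: exp(-mean pairwise squared
    distance) over the vertices the hyperedge connects, so compact
    hyperedges get weights near 1 and loose ones near 0."""
    pairs = [(p, q) for i, p in enumerate(points) for q in points[i + 1:]]
    if not pairs:
        return 1.0  # degenerate hyperedge with fewer than two vertices
    mean_sq = sum(sum((a - b) ** 2 for a, b in zip(p, q))
                  for p, q in pairs) / len(pairs)
    return math.exp(-mean_sq)

print(hyperedge_weight([(0.0, 0.0), (0.0, 0.0)]))  # 1.0
```

Plugging such a weight into a hypergraph Laplacian changes how strongly each hyperedge pulls its vertices together, which is exactly the effect the paper studies empirically.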
State-of-the-art report on nonlinear representation of sources and channels
This report consists of two complementary parts, related to the modeling of two important sources of nonlinearities in a communications system. In the first part, an overview of important past work related to the estimation, compression, and processing of sparse data through the use of nonlinear models is provided. In the second part, the current state of the art on the representation of wireless channels in the presence of nonlinearities is summarized. In addition to the characteristics of the nonlinear wireless fading channel, some information is also provided on recent approaches to the sparse representation of such channels.
DLDR: Deep Linear Discriminative Retrieval for Cultural Event Classification from a Single Image
In this paper we tackle the classification of cultural events from a single image with a deep learning based method. We use convolutional neural networks (CNNs) with the VGG-16 architecture [17], pretrained on ImageNet or the Places205 dataset for image classification, and fine-tuned on cultural events data. CNN features are robustly extracted at 4 different layers in each image. At each layer, Linear Discriminant Analysis (LDA) is employed for discriminative dimensionality reduction. An image is represented by the concatenated LDA-projected features from all layers or by the concatenation of CNN pooled features at each layer. The classification is then performed through the Iterative Nearest Neighbors-based Classifier (INNC) [20]. Classification scores are obtained for different image representation setups at train and test. The average of the scores is the output of our deep linear discriminative retrieval (DLDR) system. With 0.80 mean average precision (mAP), DLDR is a top entry for the ChaLearn LAP 2015 cultural event recognition challenge.
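The final retrieval step can be approximated with an ordinary nearest-neighbour rule. The sketch below is a plain 1-NN stand-in for the INNC classifier cited above, operating on already-concatenated feature vectors; the feature values and labels are made up for illustration:

```python
def nearest_neighbor_predict(train_feats, train_labels, query):
    """Plain 1-NN on squared Euclidean distance (a simplified
    stand-in for INNC, which instead solves an iterative
    sparse-decomposition problem over the training set)."""
    dists = [sum((a - b) ** 2 for a, b in zip(feat, query))
             for feat in train_feats]
    return train_labels[dists.index(min(dists))]

# Hypothetical concatenated LDA-projected features for three images.
feats = [[0.1, 0.9], [0.8, 0.2], [0.2, 0.8]]
labels = ["carnival", "regatta", "carnival"]
print(nearest_neighbor_predict(feats, labels, [0.15, 0.85]))  # carnival
```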
NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research
We introduce the Never Ending VIsual-classification Stream (NEVIS'22), a
benchmark consisting of a stream of over 100 visual classification tasks,
sorted chronologically and extracted from papers sampled uniformly from
computer vision proceedings spanning the last three decades. The resulting
stream reflects what the research community thought was meaningful at any point
in time. Despite being limited to classification, the resulting stream has a
rich diversity of tasks from OCR, to texture analysis, crowd counting, scene
recognition, and so forth. The diversity is also reflected in the wide range of
dataset sizes, spanning over four orders of magnitude. Overall, NEVIS'22 poses
an unprecedented challenge for current sequential learning approaches due to
the scale and diversity of tasks, yet with a low entry barrier as it is limited
to a single modality and each task is a classical supervised learning problem.
Moreover, we provide a reference implementation including strong baselines and
a simple evaluation protocol to compare methods in terms of their trade-off
between accuracy and compute. We hope that NEVIS'22 can be useful to
researchers working on continual learning, meta-learning, AutoML and more
generally sequential learning, and help these communities join forces towards
more robust and efficient models that adapt to a never-ending
stream of data. Implementations have been made available at
https://github.com/deepmind/dm_nevis
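The accuracy-versus-compute evaluation protocol described above can be mimicked with a toy loop. Everything here (the task dictionaries, the majority-class baseline, and wall-clock time as the compute proxy) is our own simplification for illustration, not the NEVIS'22 reference implementation:

```python
import time

class MajorityClass:
    """Trivial baseline: always predict the most frequent training label."""
    def fit(self, xs, ys):
        self.label = max(set(ys), key=ys.count)

    def score(self, xs, ys):
        return sum(y == self.label for y in ys) / len(ys)

def evaluate_stream(tasks, make_model):
    """Train a fresh model on each chronologically ordered task and
    record its test accuracy with the wall-clock compute it consumed."""
    results = []
    for task in tasks:
        start = time.perf_counter()
        model = make_model()
        model.fit(task["train_x"], task["train_y"])
        acc = model.score(task["test_x"], task["test_y"])
        results.append({"accuracy": acc,
                        "seconds": time.perf_counter() - start})
    return results

toy_tasks = [{"train_x": [0, 1, 2], "train_y": ["sign", "sign", "digit"],
              "test_x": [3, 4], "test_y": ["sign", "digit"]}]
print(evaluate_stream(toy_tasks, MajorityClass)[0]["accuracy"])  # 0.5
```

A sequential learner that transfers knowledge across tasks would aim to beat such per-task baselines at a lower total compute cost, which is the trade-off the benchmark measures.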
Potential of Vision Transformers for Advanced Driver-Assistance Systems: An Evaluative Approach
In this thesis, we examine the performance of Vision Transformers with respect to the current state of Advanced Driver-Assistance Systems (ADAS). We explore the Vision Transformer model and its variants on vehicle computer vision problems. Vision Transformers show performance competitive with convolutional neural networks (CNNs) but require much more training data. Vision Transformers are also more robust to image permutations than CNNs. Additionally, Vision Transformers have a lower pre-training compute cost but can overfit on smaller datasets more easily than CNNs. We therefore apply this knowledge to tune Vision Transformers on ADAS image datasets, including general traffic objects, vehicles, traffic lights, and traffic signs. We compare the performance of Vision Transformers on this problem to existing convolutional neural network approaches to determine the viability of Vision Transformer usage.
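The first step every Vision Transformer applies, cutting an image into fixed-size patch tokens, can be illustrated in pure Python. The patch size and the single-channel toy image below are hypothetical choices for the example:

```python
def image_to_patches(image, patch):
    """Split an HxW single-channel image (list of rows) into
    non-overlapping patch x patch tokens, flattened row-major.
    These flattened tokens are what a ViT linearly embeds and
    feeds to its transformer encoder."""
    h, w = len(image), len(image[0])
    return [[image[i + di][j + dj]
             for di in range(patch) for dj in range(patch)]
            for i in range(0, h - patch + 1, patch)
            for j in range(0, w - patch + 1, patch)]

img = [[r * 4 + c for c in range(4)] for r in range(4)]  # 4x4 toy image
print(image_to_patches(img, 2)[0])  # [0, 1, 4, 5]
```

Because the tokens carry no built-in notion of adjacency (unlike a convolution's sliding window), ViTs lean heavily on data and positional embeddings, which is consistent with the data-hunger noted in the abstract.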