A Discriminatively Learned CNN Embedding for Person Re-identification
We revisit two popular convolutional neural networks (CNN) in person
re-identification (re-ID), i.e., verification and classification models. The two
models have their respective advantages and limitations due to different loss
functions. In this paper, we shed light on how to combine the two models to
learn more discriminative pedestrian descriptors. Specifically, we propose a
new siamese network that simultaneously computes identification loss and
verification loss. Given a pair of training images, the network predicts the
identities of the two images and whether they belong to the same identity. Our
network learns a discriminative embedding and a similarity measurement at the
same time, thus making full use of the annotations. Albeit simple, the
learned embedding improves the state-of-the-art performance on two public
person re-ID benchmarks. Further, we show that our architecture can also be
applied to image retrieval.
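The combined objective described above can be sketched with toy numpy data. This is a minimal illustration, not the paper's implementation: the logit shapes, the 4-identity setup, and the function names are all assumptions made for the example.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over the last axis
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, label):
    # negative log-likelihood of the true class
    return -np.log(softmax(logits)[label])

def combined_loss(logits_a, logits_b, id_a, id_b, same_logits, same):
    """Identification loss for each image of the pair, plus a 2-way
    verification loss (different/same) on the pair itself."""
    ident = cross_entropy(logits_a, id_a) + cross_entropy(logits_b, id_b)
    verif = cross_entropy(same_logits, same)
    return ident + verif

# toy positive pair: 4 identities, both images belong to identity 2
rng = np.random.default_rng(0)
la, lb = rng.normal(size=4), rng.normal(size=4)
loss = combined_loss(la, lb, 2, 2, np.array([0.1, 0.9]), 1)
```

In the actual network the two loss terms would backpropagate through shared CNN weights; here the logits are just random placeholders to show how the two supervision signals add up.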
Driver Distraction Identification with an Ensemble of Convolutional Neural Networks
The World Health Organization (WHO) reported 1.25 million deaths yearly due
to road traffic accidents worldwide and the number has been continuously
increasing over the last few years. Nearly a fifth of these accidents are
caused by distracted drivers. Existing work on distracted-driver detection is
concerned with a small set of distractions (mostly cell phone usage), and
unreliable ad-hoc methods are often used. In this paper, we present the first
publicly available dataset for driver distraction identification with more
distraction postures than existing alternatives. In addition, we propose a
reliable deep learning-based solution that achieves a 90% accuracy. The system
consists of a genetically weighted ensemble of convolutional neural networks;
we show that weighting an ensemble of classifiers with a genetic algorithm
yields better classification confidence. We also study the effect of
different visual elements in distraction detection by means of face and hand
localizations, and skin segmentation. Finally, we present a thinned version of
our ensemble that could achieve 84.64% classification accuracy and operate in a
real-time environment. Comment: arXiv admin note: substantial text overlap with arXiv:1706.0949
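A genetically weighted ensemble, as described above, can be sketched as a small genetic algorithm searching for the soft-vote weights that maximize validation accuracy. Everything here is illustrative: the validation probabilities are synthetic placeholders, and the population size, mutation rate, and generation count are arbitrary choices, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical validation set: per-classifier class probabilities,
# shape (n_classifiers, n_samples, n_classes) -- placeholder data
n_clf, n_samp, n_cls = 3, 40, 10
labels = rng.integers(0, n_cls, n_samp)
probs = rng.random((n_clf, n_samp, n_cls))
for c in range(n_clf):              # bias each classifier toward the truth
    probs[c, np.arange(n_samp), labels] += 0.5 + 0.3 * c
probs /= probs.sum(-1, keepdims=True)

def fitness(w):
    # accuracy of the weighted soft vote under weight vector w
    fused = np.tensordot(w, probs, axes=1)
    return (fused.argmax(-1) == labels).mean()

# tiny genetic algorithm over the weight vector
pop = rng.random((20, n_clf))
for _ in range(30):
    scores = np.array([fitness(w / w.sum()) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]                 # selection
    kids = (parents[rng.integers(0, 10, 10)] +
            parents[rng.integers(0, 10, 10)]) / 2           # crossover
    kids += rng.normal(0, 0.05, kids.shape)                 # mutation
    pop = np.vstack([parents, np.abs(kids)])
best = pop[np.argmax([fitness(w / w.sum()) for w in pop])]
best /= best.sum()
```

The point of the genetic search is that fusion weights are tuned directly on the end metric (accuracy), which gradient methods cannot do through the non-differentiable argmax.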
Distinguishing Computer-generated Graphics from Natural Images Based on Sensor Pattern Noise and Deep Learning
Computer-generated graphics (CGs) are images generated by computer software.
The rapid development of computer graphics technologies has made it easier to
generate photorealistic computer graphics, and these graphics are quite
difficult to distinguish from natural images (NIs) with the naked eye. In this
paper, we propose a method based on sensor pattern noise (SPN) and deep
learning to distinguish CGs from NIs. Before being fed into our convolutional
neural network (CNN)-based model, the images (both CGs and NIs) are clipped into
image patches. Furthermore, three high-pass filters (HPFs) are used to remove
low-frequency signals, which represent the image content, and to reveal the
residual signal as well as the SPN introduced by the digital camera. Unlike
traditional methods of distinguishing
CGs from NIs, the proposed method utilizes a five-layer CNN to classify the
input image patches. Based on the classification results of the image patches,
we deploy a majority vote scheme to obtain the classification results for the
full-size images. The experiments have demonstrated that (1) the proposed
method with three HPFs achieves better results than with only one HPF or no
HPF, and that (2) the proposed method with three HPFs achieves 100%
accuracy even when the NIs undergo JPEG compression with a quality factor of
75. Comment: This paper has been published by Sensors. doi:10.3390/s18041296;
Sensors 2018, 18(4), 1296
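The high-pass filtering and majority-vote steps above can be sketched in a few lines of numpy. The 3x3 Laplacian-style kernel is an illustrative stand-in for the paper's three HPFs, and the patch labels are placeholders, not CNN outputs.

```python
import numpy as np

# simple 3x3 high-pass kernel: suppresses smooth image content so that
# the high-frequency residual (where SPN lives) dominates
HPF = np.array([[-1, -1, -1],
                [-1,  8, -1],
                [-1, -1, -1]], dtype=float)

def high_pass(img):
    """Valid 2-D convolution of a grayscale image with the HPF kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * HPF).sum()
    return out

def majority_vote(patch_labels):
    """Fuse per-patch decisions (0 = CG, 1 = NI) into one image label."""
    return int(np.bincount(patch_labels, minlength=2).argmax())

img = np.random.default_rng(1).random((8, 8))
residual = high_pass(img)                          # content removed
label = majority_vote(np.array([1, 0, 1, 1, 0]))   # image-level decision
```

Note that the kernel's coefficients sum to zero, so any constant (pure low-frequency) region maps to zero residual, which is exactly the content-suppression property the abstract describes.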
Vehicle-Rear: A New Dataset to Explore Feature Fusion for Vehicle Identification Using Convolutional Neural Networks
This work addresses the problem of vehicle identification through
non-overlapping cameras. As our main contribution, we introduce a novel dataset
for vehicle identification, called Vehicle-Rear, that contains more than three
hours of high-resolution videos, with accurate information about the make,
model, color and year of nearly 3,000 vehicles, in addition to the position and
identification of their license plates. To explore our dataset, we design a
two-stream CNN that simultaneously uses two of the most distinctive and
persistent features available: the vehicle's appearance and its license plate.
This is an attempt to tackle a major problem: false alarms caused by vehicles
with similar designs or by very close license plate identifiers. In the first
network stream, shape similarities are identified by a Siamese CNN that uses a
pair of low-resolution vehicle patches recorded by two different cameras. In
the second stream, we use a CNN for OCR to extract textual information,
confidence scores, and string similarities from a pair of high-resolution
license plate patches. Then, features from both streams are merged by a
sequence of fully connected layers for the final decision. In our experiments, we
compared the two-stream network against several well-known CNN architectures
using single or multiple vehicle features. The architectures, trained models,
and dataset are publicly available at https://github.com/icarofua/vehicle-rear
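The two-stream late fusion described above can be sketched with toy inputs: a cosine similarity between appearance embeddings stands in for the Siamese stream, an edit-distance similarity between decoded plate strings stands in for the OCR stream, and fixed weights stand in for the fully connected decision layers. All names and values here are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def cosine_sim(a, b):
    # similarity of two appearance embeddings (Siamese-stream stand-in)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def levenshtein(s, t):
    # one-row dynamic-programming edit distance between plate strings
    d = np.arange(len(t) + 1)
    for i, cs in enumerate(s, 1):
        prev, d[0] = d[0], i
        for j, ct in enumerate(t, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (cs != ct))
    return int(d[-1])

def fuse(feat_a, feat_b, plate_a, plate_b, w=(0.5, 0.5)):
    """Toy late fusion: weighted sum of appearance similarity and
    plate-string similarity, standing in for the FC decision layers."""
    shape_sim = cosine_sim(feat_a, feat_b)
    plate_sim = 1 - levenshtein(plate_a, plate_b) / max(len(plate_a), len(plate_b))
    return w[0] * shape_sim + w[1] * plate_sim

rng = np.random.default_rng(7)
f = rng.random(16)
score_same = fuse(f, f, "ABC1234", "ABC1234")           # matching vehicle
score_diff = fuse(f, rng.random(16), "ABC1234", "XYZ9876")
```

The design rationale mirrors the abstract: a vehicle with a similar body but a different plate, or a near-identical plate on a differently shaped vehicle, only scores high on one stream, so the fused score stays below that of a true match.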