Performance Assessment of Feature Detection Algorithms: A Methodology and Case Study on Corner Detectors
In this paper we describe a generic methodology for evaluating the labeling performance of feature detectors. We describe a method for generating a test set and apply the methodology to the performance assessment of three well-known corner detectors: the Kitchen-Rosenfeld, Paler et al., and Harris-Stephens detectors. The labeling deficiencies of each detector are related to its ability to discriminate between corners and the various features that comprise the class of non-corners.
Strengthening the Effectiveness of Pedestrian Detection with Spatially Pooled Features
We propose a simple yet effective approach to the problem of pedestrian
detection which outperforms the current state-of-the-art. Our new features are
built on the basis of low-level visual features and spatial pooling.
Incorporating spatial pooling improves the translational invariance and thus
the robustness of the detection process. We then directly optimise the partial
area under the ROC curve (\pAUC) measure, which concentrates detection
performance in the range of most practical importance. The combination of these
factors leads to a pedestrian detector which outperforms all competitors on all
of the standard benchmark datasets. We advance the state of the art by
lowering the average miss rate on the INRIA, ETH, TUD-Brussels and
Caltech-USA benchmarks.
Comment: 16 pages. Appearing in Proc. European Conf. Computer Vision (ECCV) 201
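The pAUC objective mentioned above can be illustrated with a generic computation of the area under the ROC curve restricted to a false-positive-rate band. This is a sketch of the measure being optimised, not the authors' structured optimisation method; the band limits `fpr_lo`/`fpr_hi` and the toy scores are assumptions for illustration.

```python
import numpy as np

def partial_auc(scores, labels, fpr_lo=0.0, fpr_hi=0.1):
    """Area under the ROC curve restricted to [fpr_lo, fpr_hi],
    normalised so a perfect detector scores 1.0 on that band."""
    order = np.argsort(-np.asarray(scores))        # sort by decreasing confidence
    y = np.asarray(labels)[order]
    P, N = y.sum(), len(y) - y.sum()
    tpr = np.concatenate(([0.0], np.cumsum(y) / P))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / N))
    keep = np.r_[fpr[1:] != fpr[:-1], True]        # one ROC point per distinct FPR
    fpr, tpr = fpr[keep], tpr[keep]
    grid = np.linspace(fpr_lo, fpr_hi, 1001)       # dense grid over the band
    return np.trapz(np.interp(grid, fpr, tpr), grid) / (fpr_hi - fpr_lo)
```

Concentrating the integral on a low-FPR band rewards detectors precisely in the operating range that matters for deployment, which is the motivation the abstract gives for the pAUC measure.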
HPatches: A benchmark and evaluation of handcrafted and learned local descriptors
In this paper, we propose a novel benchmark for evaluating local image
descriptors. We demonstrate that the existing datasets and evaluation protocols
do not specify unambiguously all aspects of evaluation, leading to ambiguities
and inconsistencies in results reported in the literature. Furthermore, these
datasets are nearly saturated due to the recent improvements in local
descriptors obtained by learning them from large annotated datasets. Therefore,
we introduce a new large dataset suitable for training and testing modern
descriptors, together with strictly defined evaluation protocols in several
tasks such as matching, retrieval and classification. This allows for more
realistic, and thus more reliable comparisons in different application
scenarios. We evaluate the performance of several state-of-the-art descriptors
and analyse their properties. We show that a simple normalisation of
traditional hand-crafted descriptors can boost their performance to the level
of deep-learning-based descriptors within a realistic benchmark evaluation.
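One widely used normalisation of this kind (shown here as an illustration; the paper's own scheme may differ) is the Hellinger or "RootSIFT" transform applied to a histogram descriptor:

```python
import numpy as np

def rootsift(desc, eps=1e-12):
    """Hellinger ('RootSIFT') normalisation of a histogram descriptor:
    L1-normalise, take elementwise square roots, then L2-normalise,
    so that Euclidean distance between outputs approximates the
    Hellinger distance between the original histograms."""
    desc = np.asarray(desc, dtype=np.float64)
    desc = desc / (np.abs(desc).sum() + eps)        # L1 normalisation
    desc = np.sqrt(desc)                            # variance-stabilising root
    return desc / (np.linalg.norm(desc) + eps)      # unit L2 norm
```

The transform is cheap, requires no training data, and leaves downstream matching pipelines unchanged, which is what makes such "simple normalisations" an attractive baseline against learned descriptors.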
Unsupervised edge map scoring: a statistical complexity approach
We propose a new Statistical Complexity Measure (SCM) to score edge maps
without Ground Truth (GT) knowledge. The measure is the product of two
indices: an \emph{Equilibrium} index, obtained by projecting the edge map
into a family of edge patterns, and an \emph{Entropy} index, defined as a
function of the Kolmogorov-Smirnov (KS) statistic.
This new measure can be used for performance characterization which includes:
(i)~the specific evaluation of an algorithm (intra-technique process) in order
to identify its best parameters, and (ii)~the comparison of different
algorithms (inter-technique process) in order to classify them according to
their quality.
Results on images from the South Florida and Berkeley databases show
that our approach significantly improves over Pratt's Figure of Merit (PFoM),
the standard reference-based measure for edge map evaluation, as it
takes more features into account in its evaluation.
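The product-of-two-indices structure echoes the classical LMC statistical complexity measure, sketched below on a discrete distribution. This is a generic illustration of an entropy-times-disequilibrium SCM, not the paper's specific Equilibrium and Entropy indices.

```python
import numpy as np

def statistical_complexity(probs, eps=1e-12):
    """LMC-style statistical complexity: normalised Shannon entropy times
    disequilibrium (squared Euclidean distance from the uniform law).
    Vanishes for both perfectly ordered (delta) and perfectly random
    (uniform) distributions, peaking in between."""
    p = np.asarray(probs, dtype=np.float64)
    p = p / p.sum()
    n = len(p)
    H = -np.sum(p * np.log(p + eps)) / np.log(n)   # entropy, scaled to [0, 1]
    D = np.sum((p - 1.0 / n) ** 2)                  # distance from uniformity
    return H * D
```

The appeal for unsupervised scoring is exactly this behaviour: a degenerate edge map (all edge or all non-edge) and a purely random one both score near zero, while structured maps score higher.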
Graph Laplacian for Image Anomaly Detection
The Reed-Xiaoli detector (RXD) is recognized as the benchmark algorithm for
image anomaly detection; however, it has known limitations, namely its
assumption that the image follows a multivariate Gaussian model, the
estimation and inversion of a high-dimensional covariance matrix, and its
inability to effectively include spatial awareness in its evaluation. In this
work, a novel graph-based solution to the image anomaly detection problem is
proposed; leveraging the graph Fourier transform, we are able to overcome some
of RXD's limitations while reducing computational cost. Tests
over both hyperspectral and medical images, using both synthetic and real
anomalies, show that the proposed technique obtains significant performance
gains over state-of-the-art algorithms.
Comment: Published in Machine Vision and Applications (Springer)
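The baseline RXD that this abstract improves upon scores each pixel by its Mahalanobis distance from the global background statistics, as in this minimal sketch (the global variant; the array shape and regularisation constant are assumptions):

```python
import numpy as np

def rx_detector(img):
    """Global Reed-Xiaoli (RX) anomaly score: Mahalanobis distance of each
    pixel's spectral vector from the scene mean under the sample covariance.
    img: (rows, cols, bands) array; returns a (rows, cols) score map."""
    h, w, b = img.shape
    X = img.reshape(-1, b).astype(np.float64)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    cov_inv = np.linalg.inv(cov + 1e-9 * np.eye(b))   # regularise for stability
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)  # per-pixel Mahalanobis^2
    return scores.reshape(h, w)
```

The sketch makes the limitations listed above concrete: it needs the full band-by-band covariance inverted, and each pixel is scored independently of its spatial neighbours.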
Maximum-entropy Surrogation in Network Signal Detection
Multiple-channel detection is considered in the context of a sensor network
where raw data are shared only by nodes that have a common edge in the network
graph. Established multiple-channel detectors, such as those based on
generalized coherence or multiple coherence, use pairwise measurements from
every pair of sensors in the network and are thus directly applicable only to
networks whose graphs are completely connected. An approach introduced here
uses a maximum-entropy technique to formulate surrogate values for missing
measurements corresponding to pairs of nodes that do not share an edge in the
network graph. The broader potential merit of maximum-entropy baselines in
quantifying the value of information in sensor network applications is also
noted.
Comment: 4 pages, submitted to IEEE Statistical Signal Processing Workshop,
August 201
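For a minimal instance of the surrogation idea, consider three sensors on a chain graph where the 0-1 and 1-2 pairwise measurements exist but the 0-2 pair shares no edge. The maximum-entropy (determinant-maximising) completion of the measurement matrix makes the unconnected sensors conditionally independent given their common neighbour. This toy chain case is an assumption for illustration, not the paper's general construction.

```python
import numpy as np

def maxent_surrogate_chain(S):
    """Maximum-entropy completion of the single missing pairwise
    measurement S[0,2] on a 3-node chain graph (edges 0-1 and 1-2
    observed): the determinant-maximising filling makes nodes 0 and 2
    conditionally independent given node 1."""
    S = S.astype(np.float64).copy()
    S[0, 2] = S[2, 0] = S[0, 1] * S[1, 2] / S[1, 1]
    return S

# example: unit-power sensors with measured cross-terms 0.6 and 0.4;
# the zero in the corner marks the missing 0-2 measurement
S = np.array([[1.0, 0.6, 0.0],
              [0.6, 1.0, 0.4],
              [0.0, 0.4, 1.0]])
S_hat = maxent_surrogate_chain(S)
```

Because a Gaussian model's entropy grows with the log-determinant of its covariance, filling missing entries this way commits to no structure beyond what the observed edges support, which is the sense in which the surrogate is a principled baseline.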