Efficient Evaluation of the Number of False Alarm Criterion
This paper proposes a method for efficiently computing the significance of a
parametric pattern inside a binary image. On the one hand, a-contrario
strategies avoid user involvement in tuning detection thresholds, and
allow one to account fairly for different pattern sizes. On the other hand,
a-contrario criteria become intractable when the pattern complexity in terms of
parametrization increases. In this work, we introduce a strategy which relies
on the use of a cumulative space of reduced dimensionality, derived from the
coupling of a classic (Hough) cumulative space with an integral histogram
trick. This space allows us to store partial computations which are required by
the a-contrario criterion, and to evaluate the significance with a lower
computational cost than by following a straightforward approach. The method is
illustrated on synthetic examples on patterns with various parametrizations up
to five dimensions. In order to demonstrate how to apply this generic concept
in a real scenario, we consider a difficult crack detection task in still
images, which has been addressed in the literature with various local and
global detection strategies. We model cracks as bounded segments, detected by
the proposed a-contrario criterion, which allows us to introduce additional
spatial constraints based on their relative alignment. On this application, the
proposed strategy yields state-of-the-art results, underlining its potential
for handling complex pattern detection tasks.
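The a-contrario criterion described above can be illustrated with a minimal sketch. Here the Number of False Alarms (NFA) of a candidate pattern is scored as the number of tested candidates times a binomial tail probability; the parameter values (pixel counts, noise probability, number of tests) are illustrative assumptions, and the brute-force tail sum stands in for the paper's cached cumulative-space computation:

```python
from math import comb

def binomial_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the probability that at least k of
    the n pixels covered by a candidate pattern are 'on' purely by chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def nfa(n_tests: int, n: int, k: int, p: float) -> float:
    """Number of False Alarms: the expected number of candidates at least as
    significant as this one under the background noise model."""
    return n_tests * binomial_tail(n, k, p)

# Hypothetical candidate: covers n=100 pixels, k=40 of them 'on', in noise
# where each pixel is 'on' with probability p=0.1, among 10^6 candidates.
score = nfa(n_tests=10**6, n=100, k=40, p=0.1)
detected = score <= 1.0  # standard a-contrario threshold: epsilon = 1
```

The cost of evaluating such tails for every candidate is what motivates the paper's reduced-dimensionality cumulative space: partial pixel counts are stored once and reused across candidates instead of being recomputed.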
Robust Classification for Imprecise Environments
In real-world environments it usually is difficult to specify target
operating conditions precisely, for example, target misclassification costs.
This uncertainty makes building robust classification systems problematic. We
show that it is possible to build a hybrid classifier that will perform at
least as well as the best available classifier for any target conditions. In
some cases, the performance of the hybrid actually can surpass that of the best
known classifier. This robust performance extends across a wide variety of
comparison frameworks, including the optimization of metrics such as accuracy,
expected cost, lift, precision, recall, and workforce utilization. The hybrid
also is efficient to build, to store, and to update. The hybrid is based on a
method for the comparison of classifier performance that is robust to imprecise
class distributions and misclassification costs. The ROC convex hull (ROCCH)
method combines techniques from ROC analysis, decision analysis and
computational geometry, and adapts them to the particulars of analyzing learned
classifiers. The method is efficient and incremental, minimizes the management
of classifier performance data, and allows for clear visual comparisons and
sensitivity analyses. Finally, we point to empirical evidence that a robust
hybrid classifier indeed is needed for many real-world problems.
Comment: 24 pages, 12 figures. To be published in Machine Learning Journal.
For related papers, see http://www.hpl.hp.com/personal/Tom_Fawcett/ROCCH
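The core geometric step of the ROCCH method can be sketched as follows. This is a minimal, assumed implementation, not the authors' code: each classifier is reduced to a point (FPR, TPR) in ROC space, the upper convex hull of those points (plus the trivial classifiers) is computed, and an operating point is chosen by the slope of the iso-performance line implied by the class distribution and misclassification costs:

```python
def roc_convex_hull(points):
    """Upper convex hull of ROC points (FPR, TPR), including the trivial
    classifiers (0, 0) and (1, 1). Points strictly below the hull are
    dominated under every class distribution and cost matrix."""
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # Pop the last vertex while it makes a non-convex (left or straight)
        # turn, so the chain stays concave from left to right.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            if cross < 0:          # strict right turn: vertex stays
                break
            hull.pop()
        hull.append(p)
    return hull

def best_operating_point(hull, slope):
    """Vertex first touched by an iso-performance line of slope
    m = (c(FP) * P(neg)) / (c(FN) * P(pos)); expected cost is minimized
    at the hull vertex maximizing TPR - m * FPR."""
    return max(hull, key=lambda pt: pt[1] - slope * pt[0])

# Hypothetical classifiers: (0.3, 0.5) lies below the hull and is dominated.
hull = roc_convex_hull([(0.1, 0.6), (0.3, 0.5), (0.5, 0.9)])
```

Because only hull vertices can ever be optimal, the method needs to store and compare just this small set of classifiers, which is what makes it efficient to build and update.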
Speaker segmentation and clustering
This survey focuses on two challenging speech processing topics, namely: speaker segmentation and speaker clustering. Speaker segmentation aims at finding speaker change points in an audio stream, whereas speaker clustering aims at grouping speech segments based on speaker characteristics. Model-based, metric-based, and hybrid speaker segmentation algorithms are reviewed. Concerning speaker clustering, deterministic and probabilistic algorithms are examined. A comparative assessment of the reviewed algorithms is undertaken, the algorithm advantages and disadvantages are indicated, insight into the algorithms is offered, and deductions as well as recommendations are given. Rich transcription and movie analysis are candidate applications that benefit from combined speaker segmentation and clustering. © 2007 Elsevier B.V. All rights reserved.
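A classic example of the metric-based segmentation family surveyed here is the Bayesian Information Criterion (ΔBIC) test: a window of acoustic features is modelled either by one full-covariance Gaussian or by two Gaussians split at a candidate change point, and a positive ΔBIC favours a speaker change. The sketch below is a minimal, assumed implementation of that criterion, not code from the survey; the regularization term and penalty weight λ are illustrative choices:

```python
import numpy as np

def delta_bic(segment: np.ndarray, split: int, lam: float = 1.0) -> float:
    """Metric-based change detection via the Bayesian Information Criterion:
    compare modelling the (N, d) feature matrix `segment` with one
    full-covariance Gaussian versus two Gaussians split at row `split`.
    Positive values favour a speaker change at the split point."""
    left, right = segment[:split], segment[split:]
    n, d = segment.shape

    def logdet(x):
        # Log-determinant of the sample covariance, lightly regularized
        # so short segments do not produce a singular matrix.
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(d)
        return np.linalg.slogdet(cov)[1]

    # Model-complexity penalty: extra mean + covariance parameters of the
    # second Gaussian, weighted by lambda.
    penalty = 0.5 * (d + d * (d + 1) / 2) * np.log(n)
    return (0.5 * n * logdet(segment)
            - 0.5 * len(left) * logdet(left)
            - 0.5 * len(right) * logdet(right)
            - lam * penalty)
```

In a full segmentation system this score would be evaluated over a sliding window of candidate split points; the same ΔBIC distance is also commonly reused as the merge criterion in agglomerative speaker clustering.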