Fast Detection of Curved Edges at Low SNR
Detecting edges is a fundamental problem in computer vision with many
applications, some involving very noisy images. While most edge detection
methods are fast, they perform well only on relatively clean images. Indeed,
edges in such images can be reliably detected using only local filters.
Detecting faint edges under high levels of noise cannot be done locally at the
individual pixel level, and requires more sophisticated global processing.
Unfortunately, existing methods that achieve this goal are quite slow. In this
paper we develop a novel multiscale method to detect curved edges in noisy
images. While our algorithm searches for edges over a huge set of candidate
curves, it does so in a practical runtime, nearly linear in the total number of
image pixels. As we demonstrate experimentally, our algorithm is orders of
magnitude faster than previous methods designed to deal with high noise levels.
Nevertheless, it obtains comparable, if not better, edge detection quality on a
variety of challenging noisy images.
Comment: 9 pages, 11 figures
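The core premise — faint edges are invisible to local, per-pixel filters but emerge when responses are accumulated along long candidate curves — can be illustrated with a small numpy sketch (a toy illustration of the statistics involved, not the paper's multiscale algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def noise_std_after_averaging(length, trials=20000):
    """Empirical std of unit-variance noise averaged over `length` pixels."""
    return rng.normal(0.0, 1.0, (trials, length)).mean(axis=1).std()

contrast = 0.2  # a faint edge: per-pixel SNR of only 0.2
for L in (1, 16, 64):
    snr = contrast / noise_std_after_averaging(L)
    print(f"filter length {L:3d}: effective SNR ~ {snr:.2f}")
```

Noise shrinks like 1/sqrt(L), so the effective SNR grows like sqrt(L); the difficulty the paper addresses is searching the huge space of such curved filters while keeping the runtime nearly linear in the number of pixels.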
Classic versus deep learning approaches to address computer vision challenges: a study of faint edge detection and multispectral image registration
Computer Vision involves many challenging problems.
While early work utilized classic methods, in recent years
solutions have often relied on deep neural networks. In this
study, we explore these two classes of methods through two applications at the limit of current computer
vision algorithms: faint edge detection and multispectral
image registration. We show that the detection of edges at a
low signal-to-noise ratio is a demanding task with proven lower
bounds. The introduced method processes straight and curved
edges in nearly linear complexity. Moreover, it performs well on noisy simulations, boundary datasets, and real images. To further improve accuracy and runtime, a deep solution was also explored: a multiscale neural network that detects edges in binary images using an edge-preservation loss. The second line of work considered
in this study addresses multispectral image alignment. Since
multispectral fusion is particularly informative, robust image
alignment algorithms are required. As this cannot be carried out by single-channel registration methods, we propose a traditional approach that relies on a novel edge descriptor within a feature-based registration scheme. Experiments demonstrate that, although it can align a wide range of spectral channels, it is not robust to every geometric
transformation. To that end, we developed a deep approach for
such alignment. In contrast to the previously suggested edge descriptor, our deep approach uses an invariant representation of spectral patches, learned via metric learning in what can be seen as a teacher-student scheme. All of this work is reported in five published papers with state-of-the-art experimental results and proven theory. As a whole, this research reveals that, while traditional methods are rooted in theoretical principles and are robust across a wide range of images, deep approaches run faster and achieve better performance, provided that sufficient training data are available and that they are of the same image type as the data to which the methods are applied.
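As a rough illustration of the edge-based registration idea (numpy only; a simplification for intuition, not the thesis' descriptor or network): raw intensities can differ drastically, and even invert, between spectral bands, but edge structure is shared, so phase-correlating gradient-magnitude maps can recover an unknown translation between bands:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad_mag(img):
    """Gradient magnitude: stable across bands even when contrast inverts."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def estimate_shift(band_a, band_b):
    """Translation of band_b relative to band_a via phase correlation of edges."""
    A = np.fft.fft2(grad_mag(band_a))
    B = np.fft.fft2(grad_mag(band_b))
    cross = np.conj(A) * B
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-9)).real
    dy, dx = np.unravel_index(corr.argmax(), corr.shape)
    wrap = lambda d, n: d - n if d > n // 2 else d  # undo circular wrap-around
    return wrap(dy, corr.shape[0]), wrap(dx, corr.shape[1])

# Two "bands": same scene geometry, inverted contrast, one shifted by (3, 5).
scene = rng.normal(size=(64, 64))
scene[20:40, 20:40] += 4.0                      # a bright square
band_a = scene
band_b = np.roll(-scene, (3, 5), axis=(0, 1))   # contrast-flipped and shifted
print(estimate_shift(band_a, band_b))           # recovers the (3, 5) shift
```

Phase correlation of raw intensities would fail here because the bands are anti-correlated; operating on gradient magnitudes is what makes the alignment spectral-band-agnostic.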
A statistical mechanical model of economics
Statistical mechanics pursues low-dimensional descriptions of systems with a very large number of degrees of freedom. I explore this theme in two contexts.
The main body of this dissertation explores and extends the Yard Sale Model (YSM) of economic transactions using a combination of simulations and theory. The YSM is a simple interacting model for wealth distributions which has the potential to explain the empirical observation of Pareto distributions of wealth. I develop the link between wealth condensation and the breakdown of ergodicity due to nonlinear diffusion effects which are analogous to the geometric random walk. Using this, I develop a deterministic effective theory of wealth transfer in the YSM that is useful for explaining many quantitative results.
I introduce various forms of growth to the model, paying attention to the effect of growth on wealth condensation, inequality, and ergodicity. Arithmetic growth is found to partially break condensation, and geometric growth is found to completely break condensation. Further generalizations of geometric growth with growth inequality show that the system is divided into two phases by a tipping point in the inequality parameter. The tipping point marks the line between systems which are ergodic and systems which exhibit wealth condensation.
I explore generalizations of the YSM transaction scheme to arbitrary betting functions to develop notions of universality in YSM-like models. I find that wealth condensation is universal to a large class of models which can be divided into two phases. The first exhibits slow, power-law condensation dynamics, and the second exhibits fast, finite-time condensation dynamics. I find that the YSM, which exhibits exponential dynamics, is the critical, self-similar model which marks the dividing line between the two phases.
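The basic transaction scheme is simple enough to simulate directly. The sketch below (illustrative parameters of my choosing) implements the standard YSM rule — two random agents stake a fraction of the poorer one's wealth, and a fair coin decides the winner — and measures inequality with the Gini coefficient, which climbs far above its initial value of zero as wealth condenses:

```python
import numpy as np

rng = np.random.default_rng(2)

def gini(w):
    """Gini coefficient of a wealth vector: 0 = perfect equality."""
    w = np.sort(w)
    n = w.size
    return (2 * np.arange(1, n + 1) - n - 1) @ w / (n * w.sum())

def simulate_ysm(agents=200, steps=100_000, beta=0.1):
    w = np.ones(agents)                        # everyone starts equal
    pairs = rng.integers(0, agents, size=(steps, 2))
    coins = rng.random(steps) < 0.5
    for (i, j), heads in zip(pairs, coins):
        if i == j:
            continue
        stake = beta * min(w[i], w[j])         # fraction of the poorer's wealth
        if heads:
            w[i] += stake; w[j] -= stake
        else:
            w[i] -= stake; w[j] += stake
    return w

w = simulate_ysm()
print(f"Gini after simulation: {gini(w):.2f}")
```

Although each transaction is fair in expectation, the multiplicative stake gives log-wealth a negative drift, the mechanism behind the breakdown of ergodicity discussed above; total wealth is exactly conserved throughout.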
The final chapter develops a low-dimensional approach to materials microstructure quantification. Modern materials design harnesses complex microstructure effects to develop high-performance materials, but general microstructure quantification is an unsolved problem. Motivated by statistical physics, I envision microstructure as a low-dimensional manifold, and construct this manifold by leveraging multiple machine learning approaches including transfer learning, dimensionality reduction, and computer vision breakthroughs with convolutional neural networks.
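A toy version of that manifold idea (illustrative features of my own choosing, not the dissertation's pipeline) fits in a few lines of numpy: featurize synthetic two-phase micrographs by a radially averaged power spectrum, then embed them in a low-dimensional space with PCA computed via the SVD:

```python
import numpy as np

rng = np.random.default_rng(3)

def micrograph(corr_len, n=64):
    """Binary two-phase microstructure with characteristic length ~corr_len."""
    f = np.fft.fft2(rng.normal(size=(n, n)))
    ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    f *= np.exp(-(kx**2 + ky**2) * corr_len**2 * 20)  # low-pass the noise
    return (np.fft.ifft2(f).real > 0).astype(float)

def radial_spectrum(img, bins=16):
    """Radially averaged power spectrum: a simple microstructure feature."""
    p = np.abs(np.fft.fft2(img - img.mean())) ** 2
    n = img.shape[0]
    ky, kx = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n), indexing="ij")
    r = np.hypot(kx, ky).ravel()
    idx = np.minimum((r / r.max() * bins).astype(int), bins - 1)
    return np.bincount(idx, weights=p.ravel(), minlength=bins)

# Two microstructure families: fine vs coarse characteristic length.
X = np.array([radial_spectrum(micrograph(L)) for L in [2] * 20 + [8] * 20])
X = (X - X.mean(0)) / (X.std(0) + 1e-9)        # standardize features
U, S, _ = np.linalg.svd(X, full_matrices=False)
coords = U[:, :2] * S[:2]                      # 2-D PCA embedding
```

In the embedding, the two families separate along the first principal axis; the dissertation's point is that far richer featurizations (e.g., CNN activations via transfer learning) can play the role of the hand-built spectrum here.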
Learning Mid-Level Representations for Visual Recognition
The objective of this thesis is to enhance visual recognition for objects and scenes
through the development of novel mid-level representations and accompanying learning
algorithms. In particular, this work focuses on category-level recognition, which
is still a very challenging and largely unsolved task. One crucial component in visual
recognition systems is the representation of objects and scenes. However, depending on
the representation, suitable learning strategies need to be developed that make it possible
to learn new categories automatically from training data. Therefore, the aim of this thesis
is to extend low-level representations by mid-level representations and to develop suitable
learning mechanisms.
A popular kind of mid-level representation is higher-order statistics such as
self-similarity and co-occurrence statistics. While these descriptors satisfy the
demand for higher-level object representations, they also exhibit very large and
ever-increasing dimensionality. In this thesis, a new object representation, based on curvature
self-similarity, is suggested that goes beyond the currently popular approximation of
objects using straight lines. However, like all descriptors using second order statistics,
it also exhibits a high dimensionality. Although it improves discriminability, the high
dimensionality becomes a critical issue due to limited generalization ability and the curse
of dimensionality. Given only a limited amount of training data, even sophisticated
learning algorithms such as the popular kernel methods are not able to suppress noisy or
superfluous dimensions of such high-dimensional data. Consequently, there is a natural
need for feature selection when using present-day informative features and, particularly,
curvature self-similarity. We therefore suggest an embedded feature selection method for
support vector machines that reduces complexity and improves generalization capability
of object models. The proposed curvature self-similarity representation is successfully
integrated together with the embedded feature selection in a widely used state-of-the-art
object detection framework.
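The flavor of a curvature self-similarity descriptor can be sketched as follows (a drastic simplification for intuition, not the thesis' exact construction): sample a closed contour, estimate discrete curvature at each point, and record pairwise curvature similarities, a representation invariant to rotation:

```python
import numpy as np

def curvature(points):
    """Discrete curvature of a closed 2-D contour given as an (N, 2) array."""
    d1 = (np.roll(points, -1, axis=0) - np.roll(points, 1, axis=0)) / 2.0
    d2 = np.roll(points, -1, axis=0) - 2.0 * points + np.roll(points, 1, axis=0)
    num = d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]   # cross product d1 x d2
    den = (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5 + 1e-12
    return num / den

def curvature_self_similarity(points):
    """Pairwise similarity of curvature values along the contour."""
    k = curvature(points)
    return np.exp(-np.abs(k[:, None] - k[None, :]))

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]          # constant curvature
ellipse = np.c_[2 * np.cos(t), np.sin(t)]     # curvature varies along contour
```

A circle has constant curvature, so its matrix is uniformly close to one, while the ellipse's varying curvature yields much lower off-diagonal entries. Note the descriptor is N x N for N contour points, illustrating exactly the second-order-statistics dimensionality problem that motivates the embedded feature selection above.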
The influence of higher-order statistics on category-level object recognition is further
investigated by learning co-occurrences between foreground and background to reduce
the number of false detections. While the suggested curvature self-similarity descriptor
refines the foreground model with a more detailed description, higher-order
statistics are now shown to also be suitable for explicitly modeling the background.
This is of particular use for the popular chamfer matching technique, since it is prone
to accidental matches in dense clutter. As clutter only interferes with the foreground
model contour, we learn where to place the background contours with respect to the
foreground object boundary. The co-occurrence of background contours is integrated
into a max-margin framework. Thus the suggested approach combines the advantages of
accurately detecting object parts via chamfer matching and the robustness of max-margin
learning.
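Chamfer matching itself is easy to sketch (a toy version, not the thesis' learned pipeline): precompute a distance transform of the image's edge map, then score a template placement by the mean distance from its points to the nearest edge pixel. Accidental low costs in dense clutter are precisely the failure mode the learned background co-occurrences are meant to suppress:

```python
import numpy as np

def distance_transform(edge_map):
    """Brute-force distance transform: distance to the nearest edge pixel."""
    ys, xs = np.nonzero(edge_map)
    yy, xx = np.indices(edge_map.shape)
    d = np.hypot(yy[..., None] - ys, xx[..., None] - xs)
    return d.min(axis=-1)

def chamfer_cost(dt, template_pts, offset):
    """Mean distance of template points (placed at `offset`) to nearest edges."""
    pts = template_pts + np.asarray(offset)
    return dt[pts[:, 0], pts[:, 1]].mean()

# Image containing an L-shaped contour at offset (10, 12).
img = np.zeros((32, 32))
template = np.array([(0, j) for j in range(8)] + [(i, 0) for i in range(8)])
img[tuple((template + (10, 12)).T)] = 1
dt = distance_transform(img)
print(chamfer_cost(dt, template, (10, 12)))   # correct placement scores 0
print(chamfer_cost(dt, template, (2, 2)))     # misplaced template scores high
```

Real systems replace the brute-force transform with a linear-time sweep, but the scoring logic is the same; note the cost looks only at the foreground contour, which is why clutter can produce spurious low-cost placements.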
While chamfer matching is a very efficient technique for object detection, it detects parts
based only on a simple distance measure. In contrast, mid-level parts and
patches are explicitly trained to distinguish true positives in the foreground from false
positives in the background. Because mid-level patches and parts are independent, it
is possible to train a large number of instance-specific part classifiers. This contrasts
with the currently most powerful discriminative approaches, which are typically feasible
only for a small number of parts because they model the spatial dependencies between
them. Due to their number, we cannot directly train a powerful classifier to combine
all parts. Instead, parts are randomly grouped into fewer, overlapping compositions that
are trained using a maximum-margin approach. In contrast to the common rationale of
compositional approaches, we do not aim for semantically meaningful ensembles. Rather
we seek randomized compositions that are discriminative and generalize over all instances
of a category. All compositions are then combined by a non-linear decision function,
completing the powerful hierarchy of discriminative classifiers.
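The benefit of randomized, overlapping compositions can be sketched with a toy setup (all numbers and the tanh-sum fusion are my own stand-ins, not the thesis' trained max-margin classifiers): each of many "parts" emits a weak, noisy score; random groups of parts are averaged into composition scores, which a simple non-linear function then fuses:

```python
import numpy as np

rng = np.random.default_rng(4)

n_parts, n_samples = 100, 400
labels = rng.integers(0, 2, n_samples)
# Each part score is only weakly correlated with the true label.
part_scores = 0.3 * (2 * labels[:, None] - 1) + rng.normal(size=(n_samples, n_parts))

# Randomly group parts into 30 overlapping compositions of 10 parts each.
compositions = [rng.choice(n_parts, 10, replace=False) for _ in range(30)]
comp_scores = np.stack([part_scores[:, c].mean(axis=1) for c in compositions], axis=1)

decision = np.tanh(comp_scores).sum(axis=1)              # non-linear fusion
acc_part = ((part_scores[:, 0] > 0) == labels).mean()    # one weak part alone
acc_comp = ((decision > 0) == labels).mean()             # fused compositions
print(f"single part: {acc_part:.2f}, fused compositions: {acc_comp:.2f}")
```

Averaging within a composition suppresses the per-part noise, and fusing many overlapping compositions pools evidence from effectively all parts, which is why no semantic grouping is needed: discriminativeness and generalization come from the ensemble, not from meaningful part assignments.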
In summary, this thesis improves visual recognition of objects and scenes by
developing novel mid-level representations on top of different kinds of low-level
representations. Furthermore, it investigates suitable learning algorithms to deal
with the new challenges that arise from the novel object representations presented
in this work.