Local Patterns for Face Recognition
The main objective of a local pattern is to describe an image with highly discriminative features, which makes local pattern descriptors well suited to face recognition. The word “local” indicates that the image is measured over subregions, and it is the key idea of this chapter. Regardless of the particular techniques proposed, local patterns remain one of the most interesting areas in face recognition. A local facial descriptor is a local pattern that generates its descriptor from a subregion of an image, and techniques that combine local facial descriptors in various ways are common. This chapter is concerned primarily with helping the reader develop a basic understanding of local pattern descriptors and how they are applied to face recognition. We begin by outlining the role of local patterns in face recognition and the related facial descriptors. Next, we introduce the popular local patterns and give examples demonstrating the process of each method. At the end of the chapter, we conclude with a discussion of issues related to the properties of local patterns.
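The kind of subregion descriptor the abstract describes can be illustrated with the classic Local Binary Pattern (LBP), one widely used local pattern; this is a minimal sketch, and the function names are illustrative, not from the chapter:

```python
import numpy as np

def lbp_code(patch):
    """8-bit LBP code for a 3x3 patch: each neighbour is compared
    with the centre pixel; neighbours >= centre contribute a 1-bit,
    read clockwise from the top-left corner."""
    center = patch[1, 1]
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2],
                  patch[1, 2], patch[2, 2], patch[2, 1],
                  patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbours]
    return sum(b << i for i, b in enumerate(bits))

def lbp_histogram(image, bins=256):
    """Describe a (sub)region by the normalised histogram of its
    LBP codes -- the 'local' descriptor for that subregion."""
    h, w = image.shape
    codes = [lbp_code(image[i - 1:i + 2, j - 1:j + 2])
             for i in range(1, h - 1) for j in range(1, w - 1)]
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()
```

A face image would typically be divided into a grid of subregions, with one such histogram per subregion concatenated into the final descriptor.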
Local descriptors for visual SLAM
We present a comparison of several local image descriptors in the context of visual
Simultaneous Localization and Mapping (SLAM). In visual SLAM a set of points in the
environment are extracted from images and used as landmarks. The points are represented
by local descriptors used to resolve the association between landmarks. In this paper, we
study the class separability of several descriptors under changes in viewpoint and scale.
Several experiments were carried out using sequences of images in 2D and 3D scenes.
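The data-association step the abstract refers to, resolving which stored landmark an observed descriptor corresponds to, can be sketched as nearest-neighbour matching with a ratio test; the function name, the Euclidean metric, and the 0.8 threshold are illustrative assumptions, not details from the paper:

```python
import numpy as np

def associate(observed, landmarks, ratio=0.8):
    """Associate each observed descriptor with a stored landmark.

    A match is accepted only when the nearest landmark descriptor is
    clearly closer than the second nearest (a Lowe-style ratio test);
    otherwise the observation is left unmatched (-1), e.g. as a
    candidate new landmark.
    """
    matches = []
    for d in observed:
        dists = np.linalg.norm(landmarks - d, axis=1)
        order = np.argsort(dists)
        best, second = dists[order[0]], dists[order[1]]
        if best < ratio * second:
            matches.append(int(order[0]))
        else:
            matches.append(-1)  # ambiguous association
    return matches
```

Good class separability, the property the paper evaluates, is exactly what makes the best/second-best distance gap large under viewpoint and scale changes.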
Scale Invariant Interest Points with Shearlets
Shearlets are a relatively new directional multi-scale framework for signal
analysis, which has been shown to be effective in enhancing signal
discontinuities such as edges and corners at multiple scales. In this work we
address the problem of detecting and describing blob-like features in the
shearlet framework. We derive a measure which is very effective for blob
detection and closely related to the Laplacian of Gaussian. We demonstrate
that the measure satisfies the perfect scale invariance property in the
continuous case. In the discrete setting, we derive algorithms for blob
detection and keypoint description. Finally, we provide qualitative
justifications of our findings as well as a quantitative evaluation on
benchmark data. We also report experimental evidence that our method is very
well suited to dealing with compressed and noisy images, thanks to the
sparsity property of shearlets.
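For reference, the Laplacian-of-Gaussian baseline that the shearlet measure is related to can be sketched as follows; this is the standard scale-normalised LoG, not the paper's shearlet-based measure, and the function names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_blob_response(image, sigmas):
    """Scale-normalised Laplacian-of-Gaussian responses.

    Returns an array of shape (len(sigmas), H, W); blobs appear as
    extrema of |sigma^2 * LoG| across both space and scale.  The
    sigma^2 factor is what makes responses comparable across scales.
    """
    image = image.astype(float)
    return np.stack([(s ** 2) * gaussian_laplace(image, s) for s in sigmas])

def detect_strongest_blob(image, sigmas, threshold=0.1):
    """Return (row, col, sigma) of the single strongest blob
    response, or None if it falls below the threshold (crude sketch:
    a real detector would keep all local scale-space extrema)."""
    resp = np.abs(log_blob_response(image, sigmas))
    k, i, j = np.unravel_index(np.argmax(resp), resp.shape)
    return (i, j, sigmas[k]) if resp[k, i, j] > threshold else None
```

The selected sigma encodes the blob's characteristic scale, which is the scale-invariance property the continuous-case analysis in the paper concerns.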
Multifractal Scaling, Geometrical Diversity, and Hierarchical Structure in the Cool Interstellar Medium
Multifractal scaling (MFS) refers to structures that can be described as a
collection of interwoven fractal subsets which exhibit power-law spatial
scaling behavior with a range of scaling exponents (concentration, or
singularity, strengths) and dimensions. The existence of MFS implies an
underlying multiplicative (or hierarchical, or cascade) process. Panoramic
column density images of several nearby star-forming cloud complexes,
constructed from IRAS data and justified in an appendix, are shown to exhibit
such multifractal scaling, which we interpret as indirect but quantitative
evidence for nested hierarchical structure. The relation between the dimensions
of the subsets and their concentration strengths (the "multifractal spectrum")
appears to satisfactorily order the observed regions in terms of the mixture of
geometries present: strong point-like concentrations, line-like filaments or
fronts, and space-filling diffuse structures. This multifractal spectrum is a
global property of the regions studied, and does not rely on any operational
definition of "clouds." The range of forms of the multifractal spectrum among
the regions studied implies that the column density structures do not form a
universality class, in contrast to indications for velocity and passive scalar
fields in incompressible turbulence, providing another indication that the
physics of highly compressible interstellar gas dynamics differs fundamentally
from incompressible turbulence. (Abstract truncated.) Comment: 27 pages
(LaTeX), 13 figures, 1 table; submitted to Astrophysical Journal.
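The kind of multifractal analysis described here is commonly carried out by box counting; the sketch below estimates the generalized dimensions D_q of a 2D density map (a stand-in for a column density image) and is a generic illustration of the method, not the paper's specific procedure:

```python
import numpy as np

def generalized_dimensions(density, qs, box_sizes):
    """Crude box-counting estimate of the generalized dimensions D_q.

    For each box size eps the map is coarse-grained into boxes with
    measures p_i; the partition function Z(q, eps) = sum_i p_i^q
    scales as eps^{(q-1) D_q} for a multifractal measure, so D_q is
    read off from a log-log fit.  A monofractal gives the same D_q
    for all q; a spread of D_q values signals multifractality.
    """
    density = density / density.sum()      # normalise to a measure
    dims = []
    for q in qs:                           # assumes q != 1
        logs_eps, logs_Z = [], []
        for eps in box_sizes:
            n = density.shape[0] // eps
            boxes = density[:n * eps, :n * eps]
            boxes = boxes.reshape(n, eps, n, eps).sum(axis=(1, 3))
            p = boxes[boxes > 0]
            logs_eps.append(np.log(eps))
            logs_Z.append(np.log(np.sum(p ** q)))
        slope = np.polyfit(logs_eps, logs_Z, 1)[0]
        dims.append(slope / (q - 1))
    return dims
```

For a uniform map the estimate returns D_q = 2 for every q, the trivial space-filling case against which the observed spread of exponents is measured.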
Forecasting with time series imaging
Feature-based time series representations have attracted substantial
attention in a wide range of time series analysis methods. Recently, the use of
time series features for forecast model averaging has been an emerging research
focus in the forecasting community. Nonetheless, most of the existing
approaches depend on the manual choice of an appropriate set of features.
Exploiting machine learning methods to extract features from time series
automatically becomes crucial in state-of-the-art time series analysis. In this
paper, we introduce an automated approach to extract time series features based
on time series imaging. We first transform time series into recurrence plots,
from which local features can be extracted using computer vision algorithms.
The extracted features are used for forecast model averaging. Our experiments
show that forecasting based on automatically extracted features, with less
human intervention and a more comprehensive view of the raw time series data,
yields performance highly comparable to that of the best methods on the
largest forecasting competition dataset (M4), and outperforms the top methods
on the Tourism forecasting competition dataset.
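The recurrence-plot transformation the abstract relies on can be sketched in a few lines; the embedding dimension, delay, and threshold choices here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def recurrence_plot(series, eps, dim=2, delay=1):
    """Binary recurrence plot of a univariate time series.

    The series is delay-embedded into dim-dimensional state vectors;
    R[i, j] = 1 when states i and j lie within eps of each other.
    The resulting binary image is what computer vision algorithms
    can then extract local features from.
    """
    n = len(series) - (dim - 1) * delay
    states = np.array([series[i:i + (dim - 1) * delay + 1:delay]
                       for i in range(n)])
    dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
    return (dists <= eps).astype(int)
```

Periodic series produce diagonal line structures in the plot, while trends and regime changes show up as texture changes, which is why image features carry forecasting-relevant information.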
Hierarchical object detection with deep reinforcement learning
We present a method for performing hierarchical object detection in images guided by a deep reinforcement learning agent. The key idea is to focus on those parts of the image that contain richer information and zoom in on them. We train an intelligent agent that, given an image window, is capable of deciding where to focus its attention among five different predefined region candidates (smaller windows). This procedure is iterated, providing a hierarchical image analysis.
We compare two different candidate proposal strategies to guide the object search: with and without overlap. Moreover, our work compares two different strategies to extract features from a convolutional neural network for each region proposal: a first one that computes new feature maps for each region proposal, and a second one that computes the feature maps for the whole image to later generate crops for each region proposal.
Experiments indicate better results for the overlapping candidate proposal strategy and a loss of performance for the cropped image features due to the loss of spatial resolution. We argue that, while this loss seems unavoidable when working with large numbers of object candidates, the much smaller number of region proposals generated by our reinforcement learning agent makes it feasible to extract features for each location without sharing convolutional computation among regions.
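The overlapping five-candidate hierarchy can be sketched geometrically as four corner sub-windows plus a centred one, each a fixed fraction of the parent window; the 0.75 scale factor and function names here are illustrative assumptions, not the paper's exact parameters:

```python
def candidate_windows(box, scale=0.75):
    """Five sub-window candidates of a box (x0, y0, x1, y1).

    Four corner windows plus one centred window, each `scale` times
    the parent size, so neighbouring candidates overlap.
    """
    x0, y0, x1, y1 = box
    w, h = (x1 - x0) * scale, (y1 - y0) * scale
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    return [
        (x0, y0, x0 + w, y0 + h),              # top-left
        (x1 - w, y0, x1, y0 + h),              # top-right
        (x0, y1 - h, x0 + w, y1),              # bottom-left
        (x1 - w, y1 - h, x1, y1),              # bottom-right
        (cx - w / 2, cy - h / 2,
         cx + w / 2, cy + h / 2),              # centre
    ]

def zoom(box, actions, scale=0.75):
    """Apply a sequence of candidate indices, iteratively zooming in
    -- the hierarchical analysis the agent's actions induce."""
    for a in actions:
        box = candidate_windows(box, scale)[a]
    return box
```

At each step the agent picks one of the five indices, so a trajectory of actions traces a root-to-leaf path through this window hierarchy.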