Exploiting Image-trained CNN Architectures for Unconstrained Video Classification
We conduct an in-depth exploration of different strategies for doing event
detection in videos using convolutional neural networks (CNNs) trained for
image classification. We study different ways of performing spatial and
temporal pooling, feature normalization, choice of CNN layers as well as choice
of classifiers. Making judicious choices along these dimensions leads to a
significant increase in performance over the more naive approaches used to
date. We evaluate our approach on the challenging TRECVID MED'14
dataset with two popular CNN architectures pretrained on ImageNet. On this
MED'14 dataset, our methods, based entirely on image-trained CNN features, can
outperform several state-of-the-art non-CNN models. Our proposed late fusion of
CNN- and motion-based features can further increase the mean average precision
(mAP) on MED'14 from 34.95% to 38.74%. The fusion approach achieves the
state-of-the-art classification performance on the challenging UCF-101 dataset.
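As a rough illustration of the two ingredients highlighted above, the sketch below average-pools per-frame CNN features into an L2-normalized video-level vector and late-fuses per-class scores from a CNN-based and a motion-based model with a weighted sum. The function names and the fixed fusion weight are illustrative, not the paper's exact procedure.

```python
import numpy as np

def pool_frame_features(frame_feats, normalize=True):
    """Average-pool per-frame CNN features (T, D) into a video-level vector."""
    v = np.mean(frame_feats, axis=0)
    if normalize:
        v /= np.linalg.norm(v) + 1e-12  # L2 normalization
    return v

def late_fusion(cnn_scores, motion_scores, w=0.5):
    """Weighted late fusion of per-class scores from two models."""
    return w * cnn_scores + (1 - w) * motion_scores
```

In a late-fusion scheme like this, each model is trained and scored independently and only the final per-class scores are combined, so the fusion weight can be tuned on a validation set.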
Image Classification with the Fisher Vector: Theory and Practice
A standard approach to describing an image for classification and retrieval purposes is to extract a set of local patch descriptors, encode them into a high-dimensional vector and pool them into an image-level signature. The most common patch encoding strategy consists in quantizing the local descriptors into a finite set of prototypical elements. This leads to the popular Bag-of-Visual-Words (BOV) representation. In this work, we propose to use the Fisher Kernel framework as an alternative patch encoding strategy: we describe patches by their deviation from a "universal" generative Gaussian mixture model. This representation, which we call the Fisher Vector (FV), has many advantages: it is efficient to compute, it leads to excellent results even with efficient linear classifiers, and it can be compressed with a minimal loss of accuracy using product quantization. We report experimental results on five standard datasets -- PASCAL VOC 2007, Caltech 256, SUN 397, ILSVRC 2010 and ImageNet10K -- with up to 9M images and 10K classes, showing that the FV framework is a state-of-the-art patch encoding technique.
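The mean-gradient part of the FV can be sketched as follows for a diagonal-covariance GMM. This minimal version hand-rolls the soft assignments and applies the standard power and L2 normalizations, but omits the weight and variance gradients that the full FV also uses.

```python
import numpy as np

def fisher_vector_means(X, weights, means, sigmas):
    """Fisher Vector w.r.t. GMM means only (diagonal covariances).
    X: (N, D) patch descriptors; weights: (K,); means, sigmas: (K, D)."""
    N, D = X.shape
    K = weights.shape[0]
    # Gaussian log-densities per mixture component
    log_p = np.stack([
        -0.5 * (((X - means[k]) / sigmas[k]) ** 2).sum(1)
        - np.log(sigmas[k]).sum() - 0.5 * D * np.log(2 * np.pi)
        for k in range(K)
    ], axis=1) + np.log(weights)
    gamma = np.exp(log_p - log_p.max(1, keepdims=True))
    gamma /= gamma.sum(1, keepdims=True)        # soft assignments (N, K)
    fv = np.concatenate([
        (gamma[:, k:k + 1] * (X - means[k]) / sigmas[k]).sum(0)
        / (N * np.sqrt(weights[k]))
        for k in range(K)
    ])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))      # power normalization
    return fv / (np.linalg.norm(fv) + 1e-12)    # L2 normalization
```

The resulting K*D-dimensional signature is what gets fed to the linear classifier; product quantization for compression would be applied to this vector afterwards.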
Diffusion, methods and applications
Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Date of defense: June 2014.
Big Data, an important problem nowadays, can be understood in terms of a very large number of
patterns, a very high pattern dimension or, often, both. In this thesis, we will concentrate on the
high-dimensionality issue, applying manifold learning techniques for visualizing and analyzing
such patterns.
The core technique will be Diffusion Maps (DM) and its Anisotropic Diffusion (AD) version,
introduced by Ronald R. Coifman and his school at Yale University, of which we will give
a complete, systematic, compact and self-contained treatment. This will be done after a brief
survey of previous manifold learning methods.
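For reference, the basic DM embedding rests on an eigenanalysis of a normalized similarity matrix (precisely the step whose cost the thesis's local-model contribution avoids). A minimal sketch, with an illustrative Gaussian kernel scale `eps`:

```python
import numpy as np

def diffusion_maps(X, eps, n_components=2, t=1):
    """Plain Diffusion Maps embedding of the rows of X."""
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
    K = np.exp(-D2 / eps)                                # similarity matrix
    d = K.sum(axis=1)                                    # degrees
    # symmetric conjugate of the Markov matrix P = D^-1 K
    S = K / np.sqrt(np.outer(d, d))
    vals, vecs = np.linalg.eigh(S)
    idx = np.argsort(vals)[::-1]                         # sort descending
    vals, vecs = vals[idx], vecs[:, idx]
    psi = vecs / np.sqrt(d)[:, None]                     # right eigenvectors of P
    # drop the trivial constant eigenvector; scale by eigenvalue^t
    return (vals[1:n_components + 1] ** t) * psi[:, 1:n_components + 1]
```

The dense eigendecomposition here is O(N^3) in the number of patterns, which is exactly why avoiding it matters for large datasets.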
The algorithmic contributions of the thesis will center on two computational challenges of
diffusion methods: the potentially high cost of the similarity-matrix eigenanalysis needed
to define the diffusion embedding coordinates, and the difficulty of computing this embedding
over new patterns not available for the initial eigenanalysis. With respect to the first issue, we
will show how the AD setup can be used to skip the eigenanalysis when looking for local models.
In this case, local patterns will be selected through a k-Nearest Neighbors search using a properly
defined local Mahalanobis distance that enables neighbors to be found over the latent variable
space underlying the AD model while working directly with the observable patterns, thus
avoiding the potentially costly similarity-matrix eigenanalysis.
The second proposed algorithm, which we will call Auto-adaptative Laplacian Pyramids (ALP),
focuses on the out-of-sample embedding extension and consists of a modification of the classical
Laplacian Pyramids (LP) method. In this new algorithm the LP iterations will be combined with
an estimate of the leave-one-out cross-validation (LOOCV) error, which makes it possible to
define directly during training a criterion to estimate the optimal stopping point of this
iterative algorithm.
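A schematic reading of the ALP idea, under loudly-stated assumptions: 1-D inputs, Gaussian kernels with halving bandwidths, and a LOOCV error estimated by simply zeroing the kernel diagonal. The details differ from the thesis's actual algorithm; this only illustrates the stopping mechanism.

```python
import numpy as np

def alp_fit(X, f, sigma0=2.0, max_levels=12):
    """Laplacian-Pyramid-style fit of f on 1-D points X, with an
    LOOCV-based early stop (illustrative sketch, not the thesis's ALP)."""
    D2 = (X[:, None] - X[None, :]) ** 2
    approx = np.zeros_like(f)
    residual = f.copy()
    errs = []
    sigma = sigma0
    for _ in range(max_levels):
        K = np.exp(-D2 / sigma ** 2)
        K_loo = K - np.diag(np.diag(K))       # drop self-similarity
        s_loo = K_loo @ residual / K_loo.sum(1)
        errs.append(np.mean((residual - s_loo) ** 2))  # LOOCV estimate
        s = K @ residual / K.sum(1)           # smooth the current residual
        approx += s
        residual -= s
        sigma /= 2.0                          # finer kernel at next level
        if len(errs) > 1 and errs[-1] > errs[-2]:
            break                             # error rises: stop iterating
    return approx, errs
```

The point of tracking the leave-one-out error per level is that it starts rising once the kernel bandwidth becomes so small that the pyramid begins fitting noise, which gives the stopping criterion for free during training.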
This thesis will also present several application contributions to important problems in renewable
energy and medical imaging. More precisely, we will show that DM is a good method
for dimensionality reduction of meteorological weather predictions, providing tools to visualize
and describe these data, as well as to cluster them in order to define local models.
In turn, we will apply our AD-based localized search method first to locate CT scan images
within the human body and then to predict wind energy ramps, both on individual farms
and over the whole of Spain. We will see that, in both cases, our results improve on the current
state-of-the-art methods.
Finally, we will compare our ALP proposal with the well-known Nyström method, as well as
with LP, on two high-dimensional problems: the time compression of meteorological data and
the analysis of meteorological variables relevant in daily radiation forecasts. In both cases we
will show that ALP compares favorably with the other approaches for out-of-sample extension
problems.
Temporal Extension of Scale Pyramid and Spatial Pyramid Matching for Action Recognition
Historically, researchers in the field have spent a great deal of effort to
create image representations that have scale invariance and retain spatial
location information. This paper proposes to encode equivalent temporal
characteristics in video representations for action recognition. To achieve
temporal scale invariance, we develop a method called temporal scale pyramid
(TSP). To encode temporal information, we present and compare two methods
called temporal extension descriptor (TED) and temporal division pyramid (TDP)
. Our purpose is to suggest solutions for matching complex actions that have
large variation in velocity and appearance, which is missing from most current
action representations. The experimental results on four benchmark datasets,
UCF50, HMDB51, Hollywood2 and Olympic Sports, support our approach and
significantly outperform state-of-the-art methods. Most noticeably, we achieve
65.0% mean accuracy and 68.2% mean average precision on the challenging HMDB51
and Hollywood2 datasets which constitutes an absolute improvement over the
state-of-the-art by 7.8% and 3.9%, respectively
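The temporal-division idea is the time-axis analogue of spatial pyramid matching: pool per-frame features over 1, 2, 4, ... temporal segments and concatenate. A minimal sketch (mean pooling and the level count are illustrative choices, not the paper's exact configuration):

```python
import numpy as np

def temporal_division_pyramid(frame_feats, levels=3):
    """Pool per-frame features (T, D) over 2**level temporal segments
    at each level and concatenate, preserving coarse temporal order."""
    T, D = frame_feats.shape
    parts = []
    for level in range(levels):
        n_seg = 2 ** level
        bounds = np.linspace(0, T, n_seg + 1).astype(int)
        for a, b in zip(bounds[:-1], bounds[1:]):
            parts.append(frame_feats[a:b].mean(axis=0))
    return np.concatenate(parts)   # length D * (2**levels - 1)
```

Because each level halves the segment length, the representation keeps both a global summary and a coarse record of when things happen, at the cost of a (2**levels - 1)-fold larger feature vector.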
Multi-scale Analysis based Image Fusion
Image fusion provides a better view than any of the individual source images. The aim of multi-scale analysis is to find an optimal representation for expressing high-dimensional information. Based on nonlinear approximation, this paper studies the principles and methods of image fusion and reviews its development and its current and future challenges.
The 2nd International Conference on Intelligent Systems and Image Processing 2014 (ICISIP2014), September 26-29, 2014, Nishinippon Institute of Technology, Kitakyushu, Japan
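One common multi-scale fusion recipe (a generic example, not necessarily the specific method surveyed here) decomposes both inputs into Laplacian pyramids, keeps the detail coefficient with the larger magnitude at each position, averages the coarse residuals, and reconstructs. The sketch assumes grayscale images whose dimensions are divisible by 2**levels:

```python
import numpy as np

def _down(img):
    """2x2 average pooling (assumes even dimensions)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def _up(img, shape):
    """Nearest-neighbour upsampling back to `shape`."""
    return np.repeat(np.repeat(img, 2, 0), 2, 1)[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = _down(cur)
        pyr.append(cur - _up(low, cur.shape))  # detail layer
        cur = low
    pyr.append(cur)                            # coarse residual
    return pyr

def fuse(img_a, img_b, levels=3):
    """Max-absolute selection of details, averaged coarse residual."""
    pa = laplacian_pyramid(img_a, levels)
    pb = laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(pa[:-1], pb[:-1])]
    fused.append(0.5 * (pa[-1] + pb[-1]))
    out = fused[-1]
    for detail in reversed(fused[:-1]):        # reconstruct fine-to-coarse
        out = _up(out, detail.shape) + detail
    return out
```

Picking the larger-magnitude detail coefficient at each scale is what lets the fused image keep the sharpest edges contributed by either source.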
Cross-convolutional-layer Pooling for Image Recognition
Recent studies have shown that a Deep Convolutional Neural Network (DCNN)
pretrained on a large image dataset can be used as a universal image
descriptor, and that doing so leads to impressive performance for a variety of
image classification tasks. Most of these studies adopt activations from a
single DCNN layer, usually the fully-connected layer, as the image
representation. In this paper, we propose a novel way to extract image
representations from two consecutive convolutional layers: one layer is
utilized for local feature extraction and the other serves as guidance to pool
the extracted features. By taking different viewpoints of convolutional layers,
we further develop two schemes to realize this idea. The first one directly
uses convolutional layers from a DCNN. The second one applies the pretrained
CNN on densely sampled image regions and treats the fully-connected activations
of each image region as convolutional feature activations. We then train
another convolutional layer on top of that as the pooling-guidance
convolutional layer. By applying our method to three popular visual
classification tasks, we find our first scheme tends to perform better on the
applications which need strong discrimination on subtle object patterns within
small regions while the latter excels in the cases that require discrimination
on category-level patterns. Overall, the proposed method achieves superior
performance over existing ways of extracting image representations from a DCNN.
Comment: Fixed typos. Journal extension of arXiv:1411.7466. Accepted to IEEE
Transactions on Pattern Analysis and Machine Intelligence.
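The first scheme reduces to a simple operation once both layers' activations are in hand: each guidance channel acts as a spatial weighting over the local-feature maps, and the weighted sums are concatenated. A minimal sketch, assuming the two layers share spatial resolution (in practice the guidance layer would need resizing) and adding the common power/L2 normalizations:

```python
import numpy as np

def cross_layer_pooling(local_feats, guide_feats):
    """local_feats: (C1, H, W) activations used as local features;
    guide_feats: (C2, H, W) activations whose channels serve as pooling
    weights. Returns a (C2 * C1,) image representation."""
    C1, H, W = local_feats.shape
    C2 = guide_feats.shape[0]
    L = local_feats.reshape(C1, H * W)          # local features, one per cell
    G = guide_feats.reshape(C2, H * W)          # pooling weights, one map each
    pooled = G @ L.T                            # (C2, C1) weighted spatial sums
    v = pooled.reshape(-1)
    v = np.sign(v) * np.sqrt(np.abs(v))         # power normalization
    return v / (np.linalg.norm(v) + 1e-12)      # L2 normalization
```

Each guidance channel tends to respond to a particular visual pattern, so pooling with it gathers local features from the regions where that pattern fires, rather than over the whole image indiscriminately.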
2D Reconstruction of Small Intestine's Interior Wall
Examining and interpreting a large number of wireless endoscopic images
from the gastrointestinal tract is a tiresome task for physicians. A practical
solution is to automatically construct a two dimensional representation of the
gastrointestinal tract for easy inspection. However, little has been done on
wireless endoscopic image stitching, let alone systematic investigation. The
proposed new wireless endoscopic image stitching method consists of two main
steps to improve the accuracy and efficiency of image registration. First, the
keypoints are extracted by the Principal Component Analysis Scale Invariant
Feature Transform (PCA-SIFT) algorithm and refined with Maximum Likelihood
Estimation SAmple Consensus (MLESAC) outlier removal to find the most reliable
keypoints. Second, the optimal transformation parameters obtained from the first
step are fed to the Normalised Mutual Information (NMI) algorithm as an initial
solution. With a modified Marquardt-Levenberg search strategy in a multiscale
framework, the NMI algorithm can find the optimal transformation parameters
quickly. The proposed methodology has been tested on two different
datasets - one with real wireless endoscopic images and another with images
obtained from Micro-Ball (a new wireless cubic endoscopy system with six image
sensors). The results demonstrate the accuracy and robustness of the
proposed methodology, both visually and quantitatively.
Comment: Journal draft.
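The NMI similarity used in the second registration step can be computed from a joint intensity histogram. A minimal sketch (the bin count is an illustrative choice; the paper's multiscale optimization around this measure is not shown):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """NMI = (H(A) + H(B)) / H(A, B) from a joint intensity histogram.
    Equals 2 for identical images, ~1 for independent ones."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                      # joint distribution
    px, py = pxy.sum(1), pxy.sum(0)              # marginals

    def H(p):                                    # Shannon entropy
        p = p[p > 0]
        return -(p * np.log(p)).sum()

    return (H(px) + H(py)) / H(pxy.ravel())
```

A registration loop would repeatedly warp one image by candidate transformation parameters and keep the parameters that maximize this score.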