131,248 research outputs found
Feature point extraction using scale-space representation
An algorithm for feature point extraction is presented. It is based on a scale-space representation of the image together with a system for tracking features across scales. Using synthetic and real images, it is shown that the proposed algorithm produces stable and well-localized feature point estimates, two essential properties for video applications.
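As a rough illustration of the idea, a minimal 1-D sketch (names, kernel sizes, and the intersection-based "tracking" rule are illustrative assumptions, not the paper's method): smooth a signal at several Gaussian scales and keep only the local maxima that persist at every scale.

```python
import math

def gaussian_kernel(sigma):
    # truncated Gaussian, normalized to sum to 1 (3-sigma radius is a common choice)
    radius = max(1, int(3 * sigma))
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def smooth(signal, sigma):
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(k):
            idx = min(max(i + j - r, 0), len(signal) - 1)  # clamp at the borders
            acc += w * signal[idx]
        out.append(acc)
    return out

def local_maxima(signal):
    return {i for i in range(1, len(signal) - 1)
            if signal[i] > signal[i - 1] and signal[i] > signal[i + 1]}

def stable_features(signal, sigmas=(1.0, 2.0, 4.0)):
    # crude stand-in for tracking across scales: keep maxima present at every scale
    sets = [local_maxima(smooth(signal, s)) for s in sigmas]
    return set.intersection(*sets)
```

An isolated impulse, for example, survives smoothing at all three scales and is reported at the same index, which is the kind of stability the abstract refers to.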
Multifaceted 4D Feature Segmentation and Extraction in Point and Field-based Datasets
The use of large-scale multifaceted data is common in a wide variety of
scientific applications. In many cases, this multifaceted data takes the form
of a field-based (Eulerian) and point/trajectory-based (Lagrangian)
representation as each has a unique set of advantages in characterizing a
system of study. Furthermore, studying the increasing scale and complexity of
these multifaceted datasets is limited by perceptual ability and available
computational resources, necessitating sophisticated data reduction and feature
extraction techniques. In this work, we present a new 4D feature
segmentation/extraction scheme that can operate on both the field and
point/trajectory data types simultaneously. The resulting features are
time-varying data subsets that have both a field and point-based component, and
were extracted based on underlying patterns from both data types. This enables
researchers to better explore both the spatial and temporal interplay between
the two data representations and study underlying phenomena from new
perspectives. We parallelize our approach using GPU acceleration and apply it
to real world multifaceted datasets to illustrate the types of features that
can be extracted and explored
Offline Signature-Based Fuzzy Vault (OSFV): Review and New Results
An offline signature-based fuzzy vault (OSFV) is a bio-cryptographic
implementation that uses handwritten signature images as biometrics instead of
traditional passwords to secure private cryptographic keys. Having a reliable
OSFV implementation is the first step towards automating financial and legal
authentication processes, as it provides greater security of confidential
documents by means of the embedded handwritten signatures. The authors have
recently proposed the first OSFV implementation which is reviewed in this
paper. In this system, a machine learning approach based on the dissimilarity
representation concept is employed to select a reliable feature representation
adapted for the fuzzy vault scheme. Some variants of this system are proposed
for enhanced accuracy and security. In particular, a new method that adapts
user key size is presented. The performance of the proposed methods is compared
using the Brazilian PUCPR and GPDS signature databases, and the results indicate
that the key-size adaptation method achieves a good compromise between security
and accuracy: while the average system entropy is increased from 45 bits to
about 51 bits, the AER (average error rate) is decreased by about 21%.
Comment: This paper has been submitted to the 2014 IEEE Symposium on
Computational Intelligence in Biometrics and Identity Management (CIBIM).
Optimal Representation of Anuran Call Spectrum in Environmental Monitoring Systems Using Wireless Sensor Networks
The analysis and classification of the sounds produced by certain animal species, notably anurans, have revealed these amphibians to be a potentially strong indicator of temperature fluctuations and therefore of the existence of climate change. Environmental monitoring systems using Wireless Sensor Networks are therefore of interest to obtain indicators of global warming. For the automatic classification of the sounds recorded on such systems, the proper representation of the sound spectrum is essential, since it contains the information required for cataloguing anuran calls. The present paper focuses on this process of feature extraction by exploring three alternatives: the standardized MPEG-7, the Filter Bank Energy (FBE), and the Mel Frequency Cepstral Coefficients (MFCC). Moreover, various values for every option in the extraction of spectrum features have been considered. Throughout the paper, it is shown that representing the frame spectrum with pure FBE offers slightly worse results than using the MPEG-7 features. This performance can easily be increased, however, by rescaling the FBE in a double dimension: vertically, by taking the logarithm of the energies; and horizontally, by applying mel scaling to the filter banks. On the other hand, representing the spectrum in the cepstral domain, as in MFCC, has shown additional marginal improvements in classification performance.
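The double rescaling of filter-bank energies described in this abstract can be sketched as follows. The mel conversion formula is the standard one; the band count, frequency range, and energy floor are illustrative assumptions.

```python
import math

def hz_to_mel(f):
    # standard mel-scale formula
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_centers(fmin, fmax, n_bands):
    # horizontal rescaling: filter centers equally spaced on the mel axis,
    # hence increasingly far apart in Hz
    lo, hi = hz_to_mel(fmin), hz_to_mel(fmax)
    return [mel_to_hz(lo + (hi - lo) * i / (n_bands + 1))
            for i in range(1, n_bands + 1)]

def log_fbe(energies, floor=1e-10):
    # vertical rescaling: logarithm of each band energy (floor avoids log(0))
    return [math.log(max(e, floor)) for e in energies]
```

Equal steps on the mel axis correspond to widening steps in Hz, which is the perceptually motivated horizontal rescaling the abstract credits for the improvement over raw FBE.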
Augmented Semantic Signatures of Airborne LiDAR Point Clouds for Comparison
LiDAR point clouds provide rich geometric information, which is particularly
useful for the analysis of complex scenes of urban regions. Finding structural
and semantic differences between two different three-dimensional point clouds,
say, of the same region but acquired at different time instances is an
important problem. A comparison of point clouds involves computationally
expensive registration and segmentation. We are interested in capturing the
relative differences in the geometric uncertainty and semantic content of the
point cloud without the registration process. Hence, we propose an
orientation-invariant geometric signature of the point cloud, which integrates
its probabilistic geometric and semantic classifications. We study different
properties of the geometric signature, which are an image-based encoding of
geometric uncertainty and semantic content. We explore different metrics to
determine differences between these signatures, which in turn compare point
clouds without performing point-to-point registration. Our results show that
the differences in the signatures corroborate the geometric and semantic
differences of the point clouds.
Comment: 18 pages, 6 figures, 1 table
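A minimal sketch of the registration-free comparison step: treat each signature as a flat, normalized histogram and measure a chi-square-style distance between the two. The flat-histogram layout is a made-up stand-in for the paper's image-based encoding.

```python
def normalize(hist):
    # scale the signature so its bins sum to 1
    total = sum(hist)
    return [h / total for h in hist] if total else list(hist)

def chi_square_distance(sig_a, sig_b, eps=1e-12):
    # symmetric chi-square distance between two normalized signatures;
    # 0 for identical signatures, larger for dissimilar ones
    a, b = normalize(sig_a), normalize(sig_b)
    return 0.5 * sum((x - y) ** 2 / (x + y + eps) for x, y in zip(a, b))
```

Since only the signatures are compared, no point-to-point correspondence between the two clouds is ever computed, which is the computational saving the abstract emphasizes.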
Video Data Visualization System: Semantic Classification And Personalization
We present in this paper an intelligent video data visualization tool, based
on semantic classification, for retrieving and exploring a large scale corpus
of videos. Our work is based on semantic classification resulting from semantic
analysis of video. The obtained classes will be projected in the visualization
space. The graph is represented by nodes and edges, the nodes are the keyframes
of video documents and the edges are the relation between documents and the
classes of documents. Finally, we construct the user's profile, based on the
interaction with the system, to make the system better suited to the user's
preferences.
Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review
Recently, the advancement of deep learning in discriminative feature learning
from 3D LiDAR data has led to rapid development in the field of autonomous
driving. However, the automated processing of uneven, unstructured, noisy, and
massive 3D point clouds is a challenging and tedious task. In this paper, we
provide a systematic review of existing compelling deep learning architectures
applied to LiDAR point clouds, detailing specific tasks in autonomous driving such as
segmentation, detection, and classification. Although several published
research papers focus on specific topics in computer vision for autonomous
vehicles, to date, no general survey on deep learning applied in LiDAR point
clouds for autonomous vehicles exists. Thus, the goal of this paper is to
narrow the gap in this topic. More than 140 key contributions in the recent
five years are summarized in this survey, including the milestone 3D deep
architectures, the remarkable deep learning applications in 3D semantic
segmentation, object detection, and classification; specific datasets,
evaluation metrics, and the state-of-the-art performance. Finally, we conclude
with the remaining challenges and future research directions.
Comment: 21 pages, submitted to IEEE Transactions on Neural Networks and
Learning Systems
Forecasting with time series imaging
Feature-based time series representations have attracted substantial
attention in a wide range of time series analysis methods. Recently, the use of
time series features for forecast model averaging has been an emerging research
focus in the forecasting community. Nonetheless, most of the existing
approaches depend on the manual choice of an appropriate set of features.
Exploiting machine learning methods to extract features from time series
automatically becomes crucial in state-of-the-art time series analysis. In this
paper, we introduce an automated approach to extract time series features based
on time series imaging. We first transform time series into recurrence plots,
from which local features can be extracted using computer vision algorithms.
The extracted features are used for forecast model averaging. Our experiments
show that forecasting based on automatically extracted features, with less
human intervention and a more comprehensive view of the raw time series data,
yields performance highly comparable with the best methods on the largest
forecasting competition dataset (M4), and outperforms the top methods on the
Tourism forecasting competition dataset.
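The first step above, turning a time series into a recurrence plot, can be sketched in a few lines. The threshold and series are illustrative; the subsequent computer-vision feature extraction is left out.

```python
def recurrence_plot(series, eps):
    # binary recurrence matrix: cell (i, j) is 1 when states i and j
    # are within eps of each other
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]
```

The main diagonal is always 1 (every state recurs with itself), the matrix is symmetric, and periodic behavior in the series shows up as diagonal line structures, which is what the image-based feature extractors then pick up.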
Machine Learning Techniques and Applications For Ground-based Image Analysis
Ground-based whole sky cameras have opened up new opportunities for
monitoring the earth's atmosphere. These cameras are an important complement to
satellite images by providing geoscientists with cheaper, faster, and more
localized data. The images captured by whole sky imagers can have high spatial
and temporal resolution, which is an important prerequisite for applications
such as solar energy modeling, cloud attenuation analysis, local weather
prediction, etc.
Extracting valuable information from the huge amount of image data by
detecting and analyzing the various entities in these images is challenging.
However, powerful machine learning techniques have become available to aid with
the image analysis. This article provides a detailed walk-through of recent
developments in these techniques and their applications in ground-based
imaging. We aim to bridge the gap between computer vision and remote sensing
with the help of illustrative examples. We demonstrate the advantages of using
machine learning techniques in ground-based image analysis via three primary
applications -- segmentation, classification, and denoising.
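As one concrete flavor of the segmentation application, a classic baseline for whole-sky images (not necessarily the method surveyed here) thresholds each pixel's red-to-blue ratio, since clouds scatter red and blue light more equally than clear sky does. The threshold and pixel values below are illustrative assumptions.

```python
def cloud_mask(pixels, threshold=0.6):
    """pixels: list of (r, g, b) tuples; returns 1 for cloud, 0 for sky."""
    mask = []
    for r, g, b in pixels:
        ratio = r / b if b else 1.0  # clear sky is strongly blue, so ratio is low
        mask.append(1 if ratio > threshold else 0)
    return mask
```

A bluish pixel such as (60, 90, 200) falls below the threshold and is labeled sky, while a near-gray pixel such as (200, 200, 210) is labeled cloud; the machine learning methods the article surveys replace this fixed threshold with learned decision rules.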
Surface networks
© Copyright CASA, UCL. The desire to understand and exploit the structure of continuous surfaces is common to researchers in a range of disciplines. A few examples of the varied surfaces forming an integral part of modern subjects include terrain, population density, surface atmospheric pressure, physico-chemical surfaces, computer graphics, and metrological surfaces. The focus of the work here is a group of data structures called Surface Networks, which abstract 2-dimensional surfaces by storing only the most important (also called fundamental, critical, or surface-specific) points and lines in the surfaces. Surface networks are intelligent and “natural” data structures because they store a surface as a framework of “surface” elements, unlike the DEM or TIN data structures. This report presents an overview of previous work and the ideas being developed by the authors of this report. The research on surface networks has fou
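A toy sketch of extracting the "surface-specific" points mentioned above from a height grid: classify an interior cell as a peak (higher than all eight neighbours), a pit (lower than all), or ordinary. Saddle (pass) detection requires counting sign changes around the neighbour ring and is omitted here for brevity.

```python
def classify(grid, i, j):
    # grid: 2-D list of heights; (i, j) must be an interior cell
    h = grid[i][j]
    neigh = [grid[i + di][j + dj]
             for di in (-1, 0, 1) for dj in (-1, 0, 1)
             if (di, dj) != (0, 0)]
    if all(h > n for n in neigh):
        return "peak"
    if all(h < n for n in neigh):
        return "pit"
    return "ordinary"
```

A surface network would then keep only the peaks, pits, and passes plus the ridge and course lines joining them, rather than every cell as a DEM does.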