Retaining Image Feature Matching Performance Under Low Light Conditions
Poor image quality in low-light images may result in a reduced number of feature matches between images. In this paper, we investigate the performance of feature extraction algorithms in low-light environments. To find an optimal setting that retains feature matching performance in low-light images, we examine the effect of changing the feature acceptance threshold of the feature detector and of adding pre-processing in the form of Low Light Image Enhancement (LLIE) prior to feature detection. We observe that even in low-light images, feature matching using traditional hand-crafted feature detectors still performs reasonably well when the threshold parameter is lowered. We also show that applying LLIE algorithms can improve feature matching even further when paired with the right feature extraction algorithm.
Comment: Accepted at ICCAS 2020 - 20th International Conference on Control, Automation and Systems.
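The threshold effect described in the abstract can be sketched with a toy contrast-based detector. Everything below (the detector, the 3x3 images, the threshold values) is invented for illustration; the paper evaluates real hand-crafted detectors on real low-light photographs.

```python
# Toy sketch: lowering a detector's acceptance threshold recovers
# features in a dim image. Detector and images are illustrative only.

def detect_corners(img, threshold):
    """Flag pixels whose contrast to all 4 neighbours exceeds threshold."""
    h, w = len(img), len(img[0])
    corners = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            neighbours = (img[y-1][x], img[y+1][x], img[y][x-1], img[y][x+1])
            if all(abs(c - n) > threshold for n in neighbours):
                corners.append((x, y))
    return corners

# A bright blob, and the same scene under low light (intensities scaled down).
bright = [[10, 10, 10], [10, 80, 10], [10, 10, 10]]
dim    = [[ 1,  1,  1], [ 1,  8,  1], [ 1,  1,  1]]

default_t, lowered_t = 20, 5
print(len(detect_corners(bright, default_t)))  # 1: detected at the default threshold
print(len(detect_corners(dim, default_t)))     # 0: lost in low light
print(len(detect_corners(dim, lowered_t)))     # 1: recovered by lowering the threshold
```

The same knob exists in real detectors (e.g. the FAST intensity threshold); the usual trade-off is that a lower threshold also admits more unstable, noise-driven detections.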
PHROG: A Multimodal Feature for Place Recognition
Long-term place recognition in outdoor environments remains a challenge due to strong appearance changes in the environment. The problem becomes even more difficult when the matching between two scenes has to be made with information coming from different visual sources, particularly from different spectral ranges. For instance, an infrared camera is helpful for night vision in combination with a visible camera. In this paper, we focus on testing usual feature point extractors under both constraints: repeatability across spectral ranges and long-term appearance change. We develop a new feature extraction method dedicated to improving repeatability across spectral ranges. We conduct an evaluation of feature robustness on long-term datasets coming from different imaging sources (optics, sensor sizes and spectral ranges) with a Bag-of-Words approach. The tests we perform demonstrate that our method brings a significant improvement on the image retrieval issue in a visual place recognition context, particularly when images from various spectral ranges, such as infrared and visible, must be associated: we have evaluated our approach using visible, Near InfraRed (NIR), Short Wavelength InfraRed (SWIR) and Long Wavelength InfraRed (LWIR) images.
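The Bag-of-Words retrieval used in the evaluation can be sketched as follows. The two-dimensional "descriptors" and three-word vocabulary are invented toys; a real system quantizes hundreds of high-dimensional descriptors per image against a vocabulary learned offline.

```python
import math

def bow_histogram(descriptors, vocabulary):
    """Quantise each descriptor to its nearest visual word (L2) and count."""
    hist = [0] * len(vocabulary)
    for d in descriptors:
        word = min(range(len(vocabulary)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(d, vocabulary[i])))
        hist[word] += 1
    return hist

def cosine(u, v):
    """Cosine similarity between two histograms."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

vocab = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]         # tiny visual vocabulary
visible_img  = [(0.1, 0.1), (0.9, 1.0), (0.9, 0.9)]  # descriptors, visible image
infrared_img = [(0.0, 0.2), (1.0, 0.8), (0.8, 1.1)]  # same place, other spectrum
other_place  = [(0.1, 0.9), (0.0, 1.0), (0.1, 1.1)]  # a different place

h_vis = bow_histogram(visible_img, vocab)
h_ir = bow_histogram(infrared_img, vocab)
h_other = bow_histogram(other_place, vocab)
print(cosine(h_vis, h_ir) > cosine(h_vis, h_other))  # True: same place ranks higher
```

The point of a multimodal descriptor like PHROG is to make this quantization consistent across spectral ranges, so that the same place fills the same histogram bins whether imaged in visible light or infrared.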
Image features for visual teach-and-repeat navigation in changing environments
We present an evaluation of standard image features in the context of long-term visual teach-and-repeat navigation of mobile robots, where the environment exhibits significant changes in appearance caused by seasonal weather variations and daily illumination changes. We argue that for long-term autonomous navigation, the viewpoint-, scale- and rotation-invariance of the standard feature extractors is less important than their robustness to mid- and long-term changes in environment appearance. Therefore, we focus our evaluation on the robustness of image registration to variable lighting and naturally occurring seasonal changes. We combine the detection and description components of different image extractors and evaluate their performance on five datasets collected by mobile vehicles in three different outdoor environments over the course of one year. Moreover, we propose a trainable feature descriptor based on a combination of evolutionary algorithms and Binary Robust Independent Elementary Features, which we call GRIEF (Generated BRIEF). In terms of robustness to seasonal changes, the most promising results were achieved by the SpG/CNN and STAR/GRIEF features; the latter was slightly less robust but faster to calculate.
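GRIEF keeps the BRIEF machinery (binary intensity comparisons matched by Hamming distance) and evolves the layout of the comparison pairs. The mechanics can be sketched as below; the five hand-picked pairs and synthetic patches are illustrative only (real BRIEF uses 256 randomized pairs on smoothed patches).

```python
def brief_descriptor(patch, pairs):
    """BRIEF-style bit string: bit i is 1 iff intensity at p1 < intensity at p2."""
    return [1 if patch[y1][x1] < patch[y2][x2] else 0
            for (x1, y1), (x2, y2) in pairs]

def hamming(a, b):
    """Number of differing bits; the matching cost for binary descriptors."""
    return sum(x != y for x, y in zip(a, b))

# Five hand-picked comparison pairs ((x1, y1), (x2, y2)) on an 8x8 patch.
pairs = [((1, 1), (2, 2)), ((3, 3), (1, 2)), ((0, 5), (5, 0)),
         ((4, 4), (2, 5)), ((6, 6), (3, 2))]

patch     = [[(x * y) % 17 for x in range(8)] for y in range(8)]
brighter  = [[v + 40 for v in row] for row in patch]        # global brightness shift
different = [[(x + y) % 5 for x in range(8)] for y in range(8)]

d0 = brief_descriptor(patch, pairs)
print(hamming(d0, brief_descriptor(brighter, pairs)))    # 0: ordering survives the shift
print(hamming(d0, brief_descriptor(different, pairs)))   # 2: unrelated patch differs
```

An evolutionary search in the spirit of GRIEF would score candidate pair sets on cross-season training images and keep the mutations that lower the Hamming distance between truly matching locations.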
Unsupervised learning-based approach for detecting 3D edges in depth maps
3D edge features, which represent the boundaries between different objects or surfaces in a 3D scene, are crucial for many computer vision tasks, including object recognition, tracking, and segmentation. They also have numerous real-world applications in the field of robotics, such as vision-guided grasping and manipulation of objects. To extract these features from noisy real-world depth data, reliable 3D edge detectors are indispensable. However, currently available 3D edge detection methods are either highly parameterized or require ground-truth labelling, which makes them challenging to use in practical applications. To this end, we present a new 3D edge detection approach based on unsupervised classification. Our method learns features from depth maps at three different scales using an encoder-decoder network, from which edge-specific features are extracted. These edge features are then clustered to classify each point as edge or non-edge. The proposed method has two key benefits. First, it eliminates the need for manual fine-tuning of data-specific hyper-parameters and automatically selects threshold values for edge classification. Second, it does not require any labelled training data, unlike many state-of-the-art methods that require supervised training with extensive hand-labelled datasets. The proposed method is evaluated on five benchmark datasets with single- and multi-object scenes, and compared with four state-of-the-art edge detection methods from the literature. Results demonstrate that the proposed method achieves competitive performance, despite not using any labelled data or relying on hand-tuning of key parameters.
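The key idea of replacing a hand-tuned edge threshold with clustering can be sketched with a scalar feature and a two-cluster k-means. The depth map and the 4-neighbourhood feature below are invented stand-ins for the paper's learned multi-scale features.

```python
def depth_variation(depth, x, y):
    """Scalar edge feature: max absolute depth jump to the 4-neighbourhood."""
    c = depth[y][x]
    return max(abs(c - depth[y + dy][x + dx])
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)))

def kmeans_1d(values, iters=10):
    """Two-cluster k-means on scalar features; label 1 = high-variation cluster."""
    lo, hi = min(values), max(values)
    for _ in range(iters):
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        lo = sum(v for v, l in zip(values, labels) if l == 0) / max(1, labels.count(0))
        hi = sum(v for v, l in zip(values, labels) if l == 1) / max(1, labels.count(1))
    return labels

# Synthetic depth map: a near object (depth 1.0) on a far background (5.0).
depth = [[5.0] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        depth[y][x] = 1.0

pts = [(x, y) for y in range(1, 5) for x in range(1, 5)]
feats = [depth_variation(depth, x, y) for x, y in pts]
edges = [p for p, label in zip(pts, kmeans_1d(feats)) if label == 1]
print(len(edges))  # the points straddling the depth discontinuity
```

Because the two cluster centres adapt to the data, no depth-jump threshold has to be tuned per dataset, which mirrors the first benefit the abstract claims.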
Feature-based underwater localization using an imaging sonar
The ability of an AUV to locate itself in an environment, as well as to detect relevant environmental features, is of key importance for navigation success. Sonars are one of the most common sensing devices for underwater localization and mapping, being used to detect and identify underwater structural features. This study explores the processing and analysis of acoustic images, acquired by a mechanical scanning imaging sonar, in order to extract relevant environmental features that enable location estimation. For this purpose, the performance of different state-of-the-art feature extraction algorithms was evaluated. Furthermore, an improvement to the feature matching step is proposed, in order to adapt this procedure to the characteristics of acoustic images. The extracted features are then used to feed a location estimator composed of a Simultaneous Localization and Mapping algorithm implementing an Extended Kalman Filter. Several tests were performed in a structured environment, and the results of the feature extraction process and localization are presented.
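The estimator at the end of this pipeline fuses each matched feature into the state with an EKF update. The scalar special case below shows the mechanics on a single invented range measurement; the actual filter maintains a full vehicle-plus-landmark state vector and covariance matrix.

```python
def kalman_update(x, P, z, R):
    """Fuse measurement z (noise variance R) into estimate x (variance P)."""
    K = P / (P + R)             # Kalman gain: relative trust in the measurement
    x_new = x + K * (z - x)     # correct the estimate by the weighted innovation
    P_new = (1 - K) * P         # uncertainty shrinks after the update
    return x_new, P_new

x, P = 10.0, 4.0   # prior range to a sonar feature, and its variance
z, R = 12.0, 1.0   # sonar range measurement and its noise variance
x, P = kalman_update(x, P, z, R)
print(round(x, 2), round(P, 2))  # 11.6 0.8: pulled toward z, variance reduced
```

In the full EKF-SLAM case, `K` becomes a matrix computed from the linearized measurement model, but the same trust-weighted correction applies.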
Learning deep physiological models of affect
Feature extraction and feature selection are crucial phases in the process of affective modeling. Both, however, incorporate substantial limitations that hinder the development of reliable and accurate models of affect. For the purpose of modeling affect manifested through physiology, this paper builds on recent advances in machine learning with deep learning (DL) approaches. The efficiency of DL algorithms that train artificial neural network models is tested and compared against standard feature extraction and selection approaches followed in the literature. Results on a game data corpus, containing players' physiological signals (i.e. skin conductance and blood volume pulse) and subjective self-reports of affect, reveal that DL outperforms manual ad-hoc feature extraction, as it yields significantly more accurate affective models. Moreover, DL meets and even outperforms affective models boosted by automatic feature selection for several of the scenarios examined. As the DL method is generic and applicable to any affective modeling task, the key findings of the paper suggest that ad-hoc feature extraction and, to a lesser degree, feature selection could be bypassed.
The authors would like to thank Tobias Mahlmann for his work on the development and administration of the cluster used to run the experiments. Special thanks for proofreading go to Yana Knight. Thanks also go to the Theano development team, to all participants in our experiments, and to Ubisoft, NSERC and Canada Research Chairs for funding. This work is funded, in part, by the ILearnRW (project no. 318803) and C2Learn (project no. 318480) FP7 ICT EU projects.
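For contrast, the "manual ad-hoc feature extraction" baseline can be sketched as below. The three statistics and the toy skin-conductance trace are illustrative choices, not the paper's exact feature set.

```python
import statistics

def ad_hoc_features(signal):
    """Hand-crafted features of a physiological signal: mean level,
    variability, and number of local peaks (e.g. skin-conductance responses)."""
    peaks = sum(1 for i in range(1, len(signal) - 1)
                if signal[i] > signal[i - 1] and signal[i] > signal[i + 1])
    return {"mean": statistics.fmean(signal),
            "std": statistics.pstdev(signal),
            "peaks": peaks}

# Toy skin-conductance trace for one gameplay window.
sc = [0.2, 0.3, 0.5, 0.4, 0.6, 0.9, 0.7, 0.5]
print(ad_hoc_features(sc))
```

A DL approach instead feeds the raw (or lightly preprocessed) signal into a neural network that learns its own representation, which is the substitution the paper evaluates.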
Software-based fingerprint liveness detection
Advisor: Roberto de Alencar Lotufo. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação.
Abstract: With the growing use of biometric authentication systems in the past years, spoof fingerprint detection has become increasingly important. In this work, we implemented and compared various techniques for software-based fingerprint liveness detection. We use as feature extractors Convolutional Networks with random weights, which are applied for the first time to this task, and Local Binary Patterns (LBP). The techniques were used in conjunction with dimensionality reduction through Principal Component Analysis (PCA) and a Support Vector Machine (SVM) classifier. Dataset augmentation was successfully used to improve the classifier's performance. We tested a variety of preprocessing operations such as frequency filtering, contrast equalization, and region-of-interest filtering. Thanks to the fast computers available as cloud services, an automatic and extensive search was made for the best combination of preprocessing operations, architectures, and hyper-parameters. The experiments were carried out on the datasets used in the Liveness Detection Competitions of 2009, 2011 and 2013, which together comprise almost 50,000 real and fake fingerprint images. Our best method achieves an overall rate of 95.2% of correctly classified samples, an improvement of 59% in test error when compared with the best previously published results.
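The dissertation's novel stage, convolutional feature extraction with random (untrained) weights, can be sketched as below using global max pooling and an invented test image. The real pipeline then reduces such features with PCA and classifies them with an SVM.

```python
import random

def random_conv_features(img, n_filters=4, k=3, seed=0):
    """Convolve with fixed random k x k filters and max-pool each response map."""
    rng = random.Random(seed)
    filters = [[[rng.uniform(-1, 1) for _ in range(k)] for _ in range(k)]
               for _ in range(n_filters)]
    h, w = len(img), len(img[0])
    feats = []
    for f in filters:
        best = float("-inf")                      # global max pool per filter
        for y in range(h - k + 1):
            for x in range(w - k + 1):
                resp = sum(f[j][i] * img[y + j][x + i]
                           for j in range(k) for i in range(k))
                best = max(best, resp)
        feats.append(best)
    return feats

img = [[(x * 7 + y * 13) % 32 for x in range(8)] for y in range(8)]  # toy image
print(len(random_conv_features(img)))  # 4: one pooled response per filter
```

Fixing the seed makes the "network" deterministic, so all the actual learning happens in the PCA and SVM stages downstream.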
High speed event-based visual processing in the presence of noise
Standard machine vision approaches are challenged in applications where large amounts of noisy temporal data must be processed in real time. This work aims to develop neuromorphic event-based processing systems for such challenging, high-noise environments. The novel event-based, application-focused algorithms developed are primarily designed for implementation in digital neuromorphic hardware, with a focus on noise robustness, ease of implementation, operationally useful ancillary signals, and processing speed in embedded systems.
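A standard building block for noise robustness in event-based processing is a background-activity filter: keep an event only if a spatial neighbour fired recently. The sketch below uses an invented time window and event list; it is not claimed to be this work's algorithm.

```python
def denoise_events(events, window=0.01):
    """Keep an event (t, x, y) only if one of its 8 neighbours fired
    within `window` seconds before it; isolated events are treated as noise."""
    last = {}                                   # (x, y) -> time of last event
    kept = []
    for t, x, y in sorted(events):
        if any((x + dx, y + dy) in last and t - last[(x + dx, y + dy)] <= window
               for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)):
            kept.append((t, x, y))
        last[(x, y)] = t
    return kept

events = [(0.000, 5, 5), (0.002, 6, 5), (0.004, 6, 6),  # a moving edge
          (0.050, 20, 3)]                               # isolated noise event
print(denoise_events(events))  # keeps the correlated events, drops the noise
```

Note that the first event of a genuine burst is also dropped, a known cost of this family of filters; hardware implementations favour them anyway because the state is just one timestamp per pixel.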