Compressive Source Separation: Theory and Methods for Hyperspectral Imaging
With the proliferation of high-resolution data acquisition systems and the
global push to lower energy consumption, the development of efficient sensing
techniques becomes critical. Recently, Compressed Sampling (CS) techniques,
which exploit the sparsity of signals, have made it possible to reconstruct
signals and images from fewer measurements than the traditional Nyquist
sensing approach requires. However, multichannel signals such as
hyperspectral images (HSI) have additional structure, such as inter-channel
correlations, that is not taken into account in the classical CS scheme. In
this paper we exploit the linear mixture of sources model, that is, the
assumption that the multichannel signal is a linear combination of sources,
each with its own spectral signature, and we propose new sampling schemes
exploiting this model to considerably decrease the number of measurements
needed for acquisition and source separation. Moreover, we give theoretical
lower bounds on the number of measurements required to reconstruct both the
multichannel signal and its sources. We also propose optimization algorithms
and report extensive experiments on our target application, HSI, showing
that our approach recovers HSI with far fewer measurements and less
computational effort than traditional CS approaches.
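The sparse-recovery step that CS approaches like this rely on can be illustrated with a toy sketch: a random Gaussian sensing matrix compresses a sparse signal, and greedy Orthogonal Matching Pursuit recovers it from far fewer measurements than samples. The sizes, sparsity level and sensing matrix below are illustrative assumptions, not the paper's actual sampling scheme:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 128, 48, 4           # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
y = Phi @ x                                      # compressed measurements

# Orthogonal Matching Pursuit: greedily pick the atom most correlated
# with the residual, then refit the coefficients by least squares.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef

print("recovery error:", np.linalg.norm(x_hat - x))
```

With 48 measurements for a 128-sample, 4-sparse signal, recovery is exact up to numerical precision; the paper's contribution is to push the measurement count lower still by exploiting the shared source structure across channels.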
Exploiting Deep Features for Remote Sensing Image Retrieval: A Systematic Investigation
Remote sensing (RS) image retrieval is of great significance for geological
information mining. Over the past two decades, a large amount of research on
this task has been carried out, mainly focused on three core issues: feature
extraction, similarity metrics and relevance feedback. Due to the complexity
and diversity of ground objects in high-resolution remote sensing (HRRS)
images, there is still room for improvement in current retrieval approaches.
In this paper, we analyze the three core issues of RS image retrieval and
provide a comprehensive review of existing methods. Furthermore, with the
goal of advancing the state of the art in HRRS image retrieval, we focus on
the feature extraction issue and investigate how powerful deep
representations can be used to address this task. We conduct a systematic
investigation of the factors that may affect the performance of deep
features. By optimizing each factor, we achieve remarkable retrieval results
on publicly available HRRS datasets. Finally, we explain the experimental
phenomena in detail and draw conclusions from our analysis. Our work can
serve as a guide for research on content-based RS image retrieval.
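At query time, the retrieval pipeline the abstract describes (deep feature extraction followed by a similarity metric) reduces to a nearest-neighbour search over normalised feature vectors. A minimal sketch, using random vectors as stand-ins for CNN features; the feature dimension and gallery size are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for deep features: in practice these would be activations of
# a pretrained CNN, one L2-normalised vector per gallery image.
gallery = rng.standard_normal((100, 512))
gallery /= np.linalg.norm(gallery, axis=1, keepdims=True)

def retrieve(query, feats, top_k=5):
    """Rank gallery images by cosine similarity to the query feature."""
    q = query / np.linalg.norm(query)
    sims = feats @ q              # cosine similarity (rows are unit-norm)
    return np.argsort(-sims)[:top_k]

# A query feature close to gallery item 7 should rank item 7 first.
query = gallery[7] + 0.05 * rng.standard_normal(512)
print(retrieve(query, gallery))
```

The factors the paper investigates (which layer to extract, pooling, normalisation) all change how `gallery` is built; the ranking step itself stays this simple.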
Fuzzy spectral and spatial feature integration for classification of nonferrous materials in hyperspectral data
Hyperspectral data allows the construction of more elaborate models to
sample the properties of the nonferrous materials than the standard RGB
color representation. In this paper, the nonferrous waste materials are
studied as they cannot be sorted by classical procedures due to their color,
weight and shape similarities. The experimental results presented in this
paper reveal that factors such as the various levels of oxidization of the
waste materials and the slight differences in their chemical composition
preclude the use of the spectral features in a simplistic manner for robust
material classification. To address these problems, the proposed FUSSER
(fuzzy spectral and spatial classifier) algorithm detailed in this paper
merges the spectral and spatial features to obtain a combined feature vector
that is able to better sample the properties of the nonferrous materials
than the single pixel spectral features when applied to the construction of
multivariate Gaussian distributions. This approach allows the implementation
of statistical region merging techniques in order to increase the
performance of the classification process. To achieve an efficient
implementation, the dimensionality of the hyperspectral data is reduced by
constructing bio-inspired spectral fuzzy sets that minimize the amount of
redundant information contained in adjacent hyperspectral bands. The
experimental results indicate that the proposed algorithm increased the
overall classification rate from 44% using RGB data up to 98% when the
spectral-spatial features are used for nonferrous material classification.
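The spectral-spatial fusion at the heart of the approach can be sketched as concatenating a pixel's spectrum with the mean spectrum of its spatial neighbourhood, then making a nearest-template decision. This is a toy stand-in for the paper's multivariate-Gaussian classification; the synthetic spectra, cube size and window size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
bands = 16
# Two synthetic material classes with distinct mean spectra.
mu = {0: np.linspace(0.2, 0.8, bands), 1: np.linspace(0.8, 0.2, bands)}

def combined_feature(cube, r, c, win=1):
    """Concatenate the pixel spectrum with the mean spectrum of its
    spatial neighbourhood: a spectral-spatial feature vector."""
    patch = cube[max(r - win, 0):r + win + 1, max(c - win, 0):c + win + 1]
    return np.concatenate([cube[r, c],
                           patch.reshape(-1, cube.shape[-1]).mean(0)])

# Tiny 8x8 cube: left half is material 0, right half material 1, plus noise.
cube = np.array([[mu[0] if c < 4 else mu[1] for c in range(8)]
                 for r in range(8)])
cube = cube + 0.05 * rng.standard_normal(cube.shape)

feat = combined_feature(cube, 2, 1)
# Nearest-mean decision on the combined vector (stand-in for the
# multivariate-Gaussian likelihoods described in the abstract).
d = {k: np.linalg.norm(feat - np.concatenate([m, m])) for k, m in mu.items()}
print("predicted class:", min(d, key=d.get))
```

Averaging over the neighbourhood suppresses per-pixel noise (oxidisation, illumination), which is why the combined vector separates the classes more robustly than the single-pixel spectrum alone.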
Process of image super-resolution
In this paper we describe a process of super-resolution reconstruction that
increases the resolution of an image. The need for high-resolution digital
images exists in diverse domains, for example medical and space imaging.
High-resolution digital images can be obtained at the time of shooting, but
this often comes at a significant cost because of the equipment required. To
avoid such costs, super-resolution reconstruction methods are used, which
produce a high-resolution image from one or several low-resolution images.
The American patent US 9208537 describes such an algorithm: a zone of a
low-resolution image is isolated and categorized according to the
information contained in the pixels forming the borders of the zone, and the
category of this zone determines the type of interpolation used to add
pixels within it, in order to increase the sharpness of the image. It is
also known how to reconstruct a high-resolution image from a low-resolution
image by using a super-resolution reconstruction model learned with neural
networks from an image or a picture library. The Chinese patent application
CN 107563965 and the scientific publication "Pixel Recursive Super
Resolution" by R. Dahl, M. Norouzi and J. Shlens propose such methods. The
aim of this paper is to demonstrate that it is possible to reconstruct
coherent human faces from very degraded pixelated images with a very fast
algorithm, much faster than compressed sensing (CS), easier to compute and
without deep learning, hence without heavy technological resources, i.e. a
large database of thousands of training images (see arXiv:2003.13063).
This technological breakthrough was patented in 2018 through French patent
application FR 1855485 (https://patents.google.com/patent/FR3082980A1; see
the HAL reference https://hal.archives-ouvertes.fr/hal-01875898v1).
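The zone-categorised interpolation idea summarised above (from US 9208537) can be caricatured in a few lines: categorise each zone by its local contrast, then use nearest-neighbour interpolation on edge zones to keep them crisp and bilinear interpolation elsewhere. The 2x factor, threshold and categorisation rule are illustrative assumptions, not the patented method:

```python
import numpy as np

def upscale2x(img):
    """2x upscaling where each zone is categorised by local contrast:
    flat zones are bilinearly interpolated, edge zones are replicated
    (nearest neighbour) to avoid blurring the edge."""
    h, w = img.shape
    nearest = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    # Bilinear: average each pixel with its right/down neighbours (edge-padded).
    pad = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    bilinear = np.empty((2 * h, 2 * w))
    bilinear[0::2, 0::2] = img
    bilinear[0::2, 1::2] = (pad[:-1, :-1] + pad[:-1, 1:]) / 2
    bilinear[1::2, 0::2] = (pad[:-1, :-1] + pad[1:, :-1]) / 2
    bilinear[1::2, 1::2] = (pad[:-1, :-1] + pad[:-1, 1:]
                            + pad[1:, :-1] + pad[1:, 1:]) / 4
    # Zone category from local contrast with the right/down neighbours.
    contrast = (np.abs(pad[:-1, 1:] - pad[:-1, :-1])
                + np.abs(pad[1:, :-1] - pad[:-1, :-1]))
    edge = np.repeat(np.repeat(contrast > 0.5, 2, axis=0), 2, axis=1)
    return np.where(edge, nearest, bilinear)

img = np.zeros((4, 4))
img[:, 2:] = 1.0          # vertical step edge
hi = upscale2x(img)
print(hi.shape)
```

On the step-edge test image, pure bilinear interpolation would introduce 0.5-valued pixels across the edge; the zone rule keeps the output binary, which is the sharpness gain the categorisation is meant to buy.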
Can we ID from CCTV? Image quality in digital CCTV and face identification performance
CCTV is used for an increasing number of purposes, and the new generation of
digital systems can be tailored to serve a wide range of security
requirements. However, configuration decisions are often made without
considering specific task requirements, e.g. the video quality needed for
reliable person identification. Our study investigated the relationship
between video quality and the ability of untrained viewers to identify faces
from digital CCTV images. The task required 80 participants to identify 64
faces belonging to 4 different ethnicities. Participants compared face
images taken from high-quality photographs and low-quality CCTV stills,
which were recorded at 4 different video quality bit rates (32, 52, 72 and
92 Kbps). We found that the number of correct identifications decreased by
12 (~18%) as MPEG-4 quality decreased from 92 to 32 Kbps, and by 4 (~6%) as
Wavelet video quality decreased from 92 to 32 Kbps. To achieve reliable and
effective face identification, we recommend that MPEG-4 CCTV systems be used
over Wavelet, and that video quality not be lowered below 52 Kbps during
video compression. We discuss the practical implications of these results
for security, and contribute a contextual methodology for assessing CCTV
video quality.
Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals
An electroencephalography (EEG) based Brain-Computer Interface (BCI) enables
people to communicate with the outside world by interpreting the EEG signals
of their brains to interact with devices such as wheelchairs and intelligent
robots. More specifically, motor imagery EEG (MI-EEG), which reflects a
subject's active intent, is attracting increasing attention for a variety of
BCI applications. Accurate classification of MI-EEG signals, while essential
for the effective operation of BCI systems, is challenging due to the
significant noise inherent in the signals and the lack of informative
correlation between the signals and brain activities. In this paper, we
propose a novel deep neural network based learning framework that affords
perceptive insights into the relationship between MI-EEG data and brain
activities. We design a joint convolutional recurrent neural network that
learns robust high-level feature representations through low-dimensional
dense embeddings from raw MI-EEG signals. We also employ an autoencoder
layer to eliminate various artifacts such as background activities. The
proposed approach has been evaluated extensively on a large-scale public
MI-EEG dataset and a limited but easy-to-deploy dataset collected in our
lab. The results show that our approach outperforms a series of baselines
and competitive state-of-the-art methods, yielding a classification accuracy
of 95.53%. The applicability of our proposed approach is further
demonstrated with a practical BCI system for typing.
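The joint convolutional-recurrent design can be sketched at the shape level: a 1-D convolution bank extracts local features from the multichannel EEG, and a simple recurrence summarises them over time into a fixed-size state fed to a classification head. All sizes and the untrained random weights below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1d(x, kernels):
    """Valid 1-D convolution of a (channels, time) signal with a bank of
    kernels of shape (n_filters, channels, width), followed by ReLU."""
    n_f, _, w = kernels.shape
    t_out = x.shape[1] - w + 1
    out = np.empty((n_f, t_out))
    for f in range(n_f):
        for t in range(t_out):
            out[f, t] = np.sum(kernels[f] * x[:, t:t + w])
    return np.maximum(out, 0.0)

def rnn(h_seq, W, U):
    """Minimal tanh recurrence over the time axis of (features, time);
    returns the final hidden state as a fixed-size summary."""
    h = np.zeros(U.shape[0])
    for t in range(h_seq.shape[1]):
        h = np.tanh(W @ h_seq[:, t] + U @ h)
    return h

eeg = rng.standard_normal((8, 128))          # 8 electrodes, 128 samples
feats = conv1d(eeg, 0.1 * rng.standard_normal((16, 8, 5)))
state = rnn(feats, 0.1 * rng.standard_normal((32, 16)),
            0.1 * rng.standard_normal((32, 32)))
logits = rng.standard_normal((2, 32)) @ state  # 2 MI classes, untrained head
print(feats.shape, state.shape, logits.shape)
```

The convolution captures short-range temporal patterns per electrode group, while the recurrence models how they evolve across the trial; the paper additionally denoises with an autoencoder layer, which this sketch omits.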