Quantifying image distortion based on Gabor filter bank and multiple regression analysis
Image quality assessment is indispensable for image-based applications. Approaches to image quality assessment fall into two main categories: subjective and objective methods. Subjective assessment has been widely used; however, careful subjective assessments are experimentally difficult and time-consuming, and the results obtained may vary with the test conditions. Objective image quality assessment, on the other hand, not only alleviates these difficulties but also broadens the field of application. Several methods have therefore been developed for quantifying the distortion present in an image, achieving goodness of fit between subjective and objective scores of up to 92%. Nevertheless, current methodologies are designed assuming that the nature of the distortion is known. This is generally a limiting assumption for practical applications, since in the majority of cases the distortions in an image are unknown. We therefore believe that current image quality assessment methods should be adapted to identify and quantify image distortion at the same time. Such a combination can improve processes such as enhancement, restoration, compression, and transmission. We present an approach based on experimental design and the joint spatial/frequency localization of Gabor filters to study their influence on image quality assessment, achieving correct identification and quantification of the distortion affecting images. This method provides accurate scores and differentiability between distortions.
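As a rough illustration of the pipeline this abstract describes, a bank of Gabor filters at a few orientations and frequencies can produce per-channel energy features, which a multiple-regression model then maps to a quality score. The filter sizes, orientations, and the synthetic scores below are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(size, theta, freq, sigma):
    """Real part of a Gabor kernel: Gaussian envelope modulated by a cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # coordinate rotated to theta
    env = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))   # isotropic Gaussian envelope
    return env * np.cos(2.0 * np.pi * freq * xr)

def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4), freqs=(0.10, 0.25)):
    """One energy feature per (orientation, frequency) channel of the bank."""
    feats = []
    for th in thetas:
        for f in freqs:
            resp = convolve2d(image, gabor_kernel(15, th, f, sigma=4.0), mode="valid")
            feats.append(float(np.mean(resp**2)))
    return np.array(feats)

# Multiple regression: fit a linear map from bank features to quality scores.
rng = np.random.default_rng(0)
images = [rng.standard_normal((48, 48)) for _ in range(12)]
X = np.stack([gabor_features(im) for im in images])
X = np.hstack([X, np.ones((X.shape[0], 1))])          # intercept column
scores = rng.uniform(0, 100, size=12)                 # stand-in subjective scores
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
pred = X @ coef
```

In practice the regression would be fit against real subjective (e.g. MOS) scores rather than the random stand-ins used here.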
Forecasting Time Series with VARMA Recursions on Graphs
Graph-based techniques emerged as a choice to deal with the dimensionality
issues in modeling multivariate time series. However, there is yet no complete
understanding of how the underlying structure could be exploited to ease this
task. This work provides contributions in this direction by considering the
forecasting of a process evolving over a graph. We make use of the
(approximate) time-vertex stationarity assumption, i.e., time-varying graph
signals whose first and second order statistical moments are invariant over
time and correlated to a known graph topology. The latter is combined with VAR
and VARMA models to tackle the dimensionality issues present in predicting the
temporal evolution of multivariate time series. We find out that by projecting
the data to the graph spectral domain: (i) the multivariate model estimation
reduces to that of fitting a number of uncorrelated univariate ARMA models and
(ii) an optimal low-rank data representation can be exploited so as to further
reduce the estimation costs. In the case that the multivariate process can be
observed at a subset of nodes, the proposed models extend naturally to Kalman
filtering on graphs allowing for optimal tracking. Numerical experiments with
both synthetic and real data validate the proposed approach and highlight its
benefits over state-of-the-art alternatives.
Comment: submitted to the IEEE Transactions on Signal Processing.
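A minimal sketch of the spectral-domain idea: project the time-vertex signal onto the Laplacian eigenbasis (the graph Fourier transform), then fit one univariate model per graph frequency instead of a coupled multivariate model. The ring graph, AR(1) components, and least-squares fit below are simplifying assumptions standing in for the paper's VAR/VARMA machinery:

```python
import numpy as np

# Small ring graph: adjacency, combinatorial Laplacian, eigendecomposition (GFT).
N, T = 8, 200
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(1)) - A
lam, U = np.linalg.eigh(L)        # columns of U form the graph Fourier basis

# Synthetic time-vertex signal: each graph frequency follows its own AR(1).
rng = np.random.default_rng(1)
phi_true = rng.uniform(-0.8, 0.8, size=N)
Xhat = np.zeros((N, T))
for t in range(1, T):
    Xhat[:, t] = phi_true * Xhat[:, t - 1] + 0.1 * rng.standard_normal(N)
X = U @ Xhat                      # observed signal in the vertex domain

# Estimation: project to the spectral domain, then fit N *uncorrelated*
# univariate AR(1) models by least squares instead of one coupled model.
Y = U.T @ X
phi_est = np.sum(Y[:, 1:] * Y[:, :-1], axis=1) / np.sum(Y[:, :-1]**2, axis=1)

# One-step forecast, mapped back to the vertex domain.
x_next = U @ (phi_est * Y[:, -1])
```

The point of the projection is visible in the estimation step: a single N-variate fit becomes N independent scalar fits.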
Kepler Mission Stellar and Instrument Noise Properties
Kepler Mission results are rapidly contributing to fundamentally new
discoveries in both the exoplanet and asteroseismology fields. The data
returned from Kepler are unique in terms of the number of stars observed,
precision of photometry for time series observations, and the temporal extent
of high duty cycle observations. As the first mission to provide extensive time
series measurements on thousands of stars over months to years at a level
hitherto possible only for the Sun, the results from Kepler will vastly
increase our knowledge of stellar variability for quiet solar-type stars. Here
we report on the stellar noise inferred on the timescale of a few hours of most
interest for detection of exoplanets via transits. By design the data from
moderately bright Kepler stars are expected to have roughly comparable levels
of noise intrinsic to the stars and arising from a combination of fundamental
limitations such as Poisson statistics and any instrument noise. The noise
levels attained by Kepler on-orbit exceed by some 50% the target levels for
solar-type, quiet stars. We provide a decomposition of observed noise for an
ensemble of 12th magnitude stars arising from fundamental terms (Poisson and
readout noise), added noise due to the instrument and that intrinsic to the
stars. The largest factor in the modestly higher than anticipated noise follows
from intrinsic stellar noise. We show that using stellar parameters from
galactic stellar synthesis models, and projections to stellar rotation,
activity and hence noise levels reproduces the primary intrinsic stellar noise
features.
Comment: Accepted by ApJ; 26 pages, 20 figures.
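The decomposition described above rests on independent noise terms adding in quadrature, so subtracting the known Poisson, readout, and instrument terms from the observed total (in quadrature) leaves the intrinsic stellar component. The numbers below are made-up values for illustration, not Kepler's actual noise budget:

```python
import numpy as np

def stellar_noise(total_ppm, poisson_ppm, readout_ppm, instrument_ppm):
    """Independent noise terms add in quadrature, so the intrinsic stellar
    component is what remains after removing the known terms."""
    resid = total_ppm**2 - poisson_ppm**2 - readout_ppm**2 - instrument_ppm**2
    return np.sqrt(np.maximum(resid, 0.0))  # clip: measurement scatter can push resid < 0

# Illustrative (made-up) few-hour noise budget for a 12th-magnitude star, in ppm.
total = 30.0
poisson, readout, instrument = 14.0, 6.0, 10.0
stellar = stellar_noise(total, poisson, readout, instrument)
```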
An objective based classification of aggregation techniques for wireless sensor networks
Wireless Sensor Networks have gained immense popularity in recent years due to their ever-increasing capabilities and wide range of critical applications. A huge body of research has been dedicated to finding ways to utilize the limited resources of sensor nodes efficiently. One of the common ways to minimize energy consumption is aggregation of input data. We note that every aggregation technique has an improvement objective with respect to the output it produces: each is designed to achieve some target, e.g. reduce data size, minimize transmission energy, or enhance accuracy. This paper presents a comprehensive survey of aggregation techniques that can be used in a distributed manner to improve the lifetime and energy conservation of wireless sensor networks. The main contribution of this work is a novel classification of such techniques based on the type of improvement they offer when applied to WSNs. Because a myriad of definitions of aggregation exist, we first review the meanings of the term as applied to WSNs, and the concept is then associated with the proposed classes. Each class of techniques is divided into a number of subclasses, and a brief review of related WSN literature for each is also presented.
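To make the idea of data-size-reducing aggregation concrete, here is a minimal sketch (not drawn from any particular surveyed technique) in which a node collapses its children's raw readings into a fixed-size summary, and summaries merge hop by hop up the routing tree:

```python
from dataclasses import dataclass

@dataclass
class Summary:
    """Fixed-size aggregate a node forwards instead of raw samples."""
    count: int
    total: float
    lo: float
    hi: float

    @property
    def mean(self):
        return self.total / self.count

def aggregate(readings):
    """Collapse raw child readings into one O(1)-size summary packet."""
    return Summary(len(readings), sum(readings), min(readings), max(readings))

def merge(a, b):
    """Summaries compose, so aggregation proceeds hop by hop up the tree."""
    return Summary(a.count + b.count, a.total + b.total,
                   min(a.lo, b.lo), max(a.hi, b.hi))
```

The energy saving comes from the packet size being constant regardless of how many descendants contributed readings.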
Recent Progress in Image Deblurring
This paper comprehensively reviews the recent development of image
deblurring, including non-blind/blind, spatially invariant/variant deblurring
techniques. Indeed, these techniques share the same objective of inferring a
latent sharp image from one or several corresponding blurry images, while the
blind deblurring techniques are also required to derive an accurate blur
kernel. Considering the critical role of image restoration in modern imaging
systems to provide high-quality images under complex environments such as
motion, undesirable lighting conditions, and imperfect system components, image
deblurring has attracted growing attention in recent years. From the viewpoint
of how to handle the ill-posedness which is a crucial issue in deblurring
tasks, existing methods can be grouped into five categories: Bayesian inference
framework, variational methods, sparse representation-based methods,
homography-based modeling, and region-based methods. In spite of achieving a
certain level of development, image deblurring, especially the blind case, is
limited in its success by complex application conditions which make the blur
kernel hard to obtain and be spatially variant. We provide a holistic
understanding and deep insight into image deblurring in this review. An
analysis of the empirical evidence for representative methods, practical
issues, as well as a discussion of promising future directions are also
presented.
Comment: 53 pages, 17 figures.
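As one concrete instance of handling the ill-posedness in the non-blind setting, a Wiener-style regularized inverse filter damps the frequencies the blur kernel suppresses. The kernel, test image, and regularization constant below are illustrative choices, not a method from the review:

```python
import numpy as np

def wiener_deblur(blurry, kernel, k=1e-3):
    """Non-blind deconvolution in the Fourier domain: the constant k
    regularizes frequencies where the blur kernel's response is tiny
    (the ill-posed part of the inversion)."""
    H = np.fft.fft2(kernel, s=blurry.shape)
    G = np.fft.fft2(blurry)
    F = np.conj(H) * G / (np.abs(H)**2 + k)
    return np.real(np.fft.ifft2(F))

# Apply a circular box blur, then restore.
rng = np.random.default_rng(2)
sharp = rng.random((32, 32))
kern = np.ones((3, 3)) / 9.0
H = np.fft.fft2(kern, s=sharp.shape)
blurry = np.real(np.fft.ifft2(np.fft.fft2(sharp) * H))  # exact circular convolution
restored = wiener_deblur(blurry, kern)
```

Because the forward blur here is exactly circular convolution, the restoration error comes only from the frequencies the regularizer damps; blind deblurring additionally has to estimate `kern` itself.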
Accurate and Efficient Filtering using Anisotropic Filter Decomposition
Efficient filtering remains an important challenge in computer graphics, particularly when filters are spatially varying, have large extent, and/or exhibit complex anisotropic profiles. We present an efficient filtering approach for these difficult cases based on anisotropic filter decomposition (IFD). By decomposing complex filters into linear combinations of simpler, displaced isotropic kernels, and precomputing a compact prefiltered dataset, we are able to interactively apply any number of (potentially transformed) filters to a signal. Our performance scales linearly with the size of the decomposition, not the size or dimensionality of the filter, and our prefiltered data requires reasonable storage, comparing favorably to the state of the art. We apply IFD to interesting problems in image processing and realistic rendering.
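A small sketch of the core idea, under simplifying assumptions (an elongated Gaussian target, displacements along one axis, and least-squares weights rather than the paper's construction): approximate an anisotropic filter as a weighted sum of displaced isotropic kernels, so the filtering cost scales with the number of terms in the decomposition:

```python
import numpy as np

def iso_gauss(size, sigma, cx=0.0):
    """Normalized isotropic Gaussian kernel, displaced by cx along x."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-((x - cx)**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def aniso_gauss(size, sx, sy):
    """Normalized axis-aligned anisotropic Gaussian kernel."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 / (2 * sx**2) + y**2 / (2 * sy**2)))
    return g / g.sum()

# Decompose an elongated Gaussian into isotropic kernels displaced along
# its major axis; the weights come from a least-squares fit.
size = 31
target = aniso_gauss(size, sx=6.0, sy=1.5)
offsets = np.linspace(-8, 8, 9)                      # displacements along x
basis = np.stack([iso_gauss(size, 1.5, cx=d).ravel() for d in offsets], axis=1)
w, *_ = np.linalg.lstsq(basis, target.ravel(), rcond=None)
approx = (basis @ w).reshape(size, size)
rel_err = np.linalg.norm(approx - target) / np.linalg.norm(target)
```

Once the weights are fixed, applying the anisotropic filter reduces to a few shifted lookups into a single isotropic prefiltered dataset, which is where the linear scaling comes from.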
Multifrequency Aperture-Synthesizing Microwave Radiometer System (MFASMR). Volume 1
Background material and a systems analysis of a multifrequency aperture-synthesizing microwave radiometer system are presented. It was found that the system does not exhibit high performance, because much of the available thermal power is not used in constructing the image and because the image that can be formed has a resolution of only ten lines. An analysis of image reconstruction is given, and the system is compared with conventional aperture-synthesis systems.
DeepSphere: Efficient spherical Convolutional Neural Network with HEALPix sampling for cosmological applications
Convolutional Neural Networks (CNNs) are a cornerstone of the Deep Learning
toolbox and have led to many breakthroughs in Artificial Intelligence. These
networks have mostly been developed for regular Euclidean domains such as those
supporting images, audio, or video. Because of their success, CNN-based methods
are becoming increasingly popular in Cosmology. Cosmological data often comes
as spherical maps, which make the use of the traditional CNNs more complicated.
The commonly used pixelization scheme for spherical maps is the Hierarchical
Equal Area isoLatitude Pixelisation (HEALPix). We present a spherical CNN for
analysis of full and partial HEALPix maps, which we call DeepSphere. The
spherical CNN is constructed by representing the sphere as a graph. Graphs are
versatile data structures that can act as a discrete representation of a
continuous manifold. Using the graph-based representation, we define many of
the standard CNN operations, such as convolution and pooling. With filters
restricted to being radial, our convolutions are equivariant to rotation on the
sphere, and DeepSphere can be made invariant or equivariant to rotation. This
way, DeepSphere is a special case of a graph CNN, tailored to the HEALPix
sampling of the sphere. This approach is computationally more efficient than
using spherical harmonics to perform convolutions. We demonstrate the method on
a classification problem of weak lensing mass maps from two cosmological models
and compare the performance of the CNN with that of two baseline classifiers.
The results show that the performance of DeepSphere is always superior or equal
to both of these baselines. For high noise levels and for data covering only a
smaller fraction of the sphere, DeepSphere achieves typically 10% better
classification accuracy than those baselines. Finally, we show how learned
filters can be visualized to introspect the neural network.
Comment: arXiv admin note: text overlap with arXiv:astro-ph/0409513 by other authors.
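The radial-filter construction can be sketched as a polynomial in the graph Laplacian: such a filter sees only graph structure, so it commutes with graph symmetries. Below, a small ring graph stands in for the HEALPix graph, and equivariance shows up as filtering commuting with a cyclic shift of the nodes (the graph and coefficients are illustrative, not DeepSphere's architecture):

```python
import numpy as np

def graph_conv(L, x, coeffs):
    """A 'radial' graph filter: a polynomial in the Laplacian,
    y = sum_k c_k L^k x. It needs only repeated (sparse) L @ x products,
    avoiding a full spherical-harmonic transform."""
    y = np.zeros_like(x)
    Lx = x.copy()
    for c in coeffs:
        y += c * Lx
        Lx = L @ Lx
    return y

# Ring graph as a toy stand-in for the HEALPix pixelization graph.
N = 12
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(1)) - A

rng = np.random.default_rng(3)
x = rng.standard_normal(N)
coeffs = [0.5, -0.3, 0.1]
y = graph_conv(L, x, coeffs)
# Equivariance under a graph symmetry: rotating the input rotates the output.
y_rot = graph_conv(L, np.roll(x, 1), coeffs)
```

In a trained network the `coeffs` would be learned per layer, and pooling would exploit HEALPix's hierarchical structure.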