A statistical reduced-reference method for color image quality assessment
Although color is a fundamental feature of human visual perception, it has
been largely unexplored in the reduced-reference (RR) image quality assessment
(IQA) schemes. In this paper, we propose a natural scene statistic (NSS)
method, which efficiently uses this information. It is based on the statistical
deviation between the steerable pyramid coefficients of the reference color
image and the degraded one. We propose and analyze the multivariate generalized
Gaussian distribution (MGGD) to model the underlying statistics. In order to
quantify the degradation, we develop and evaluate two measures based
respectively on the geodesic distance between two MGGDs and on the closed
form of the Kullback-Leibler divergence (KLD). We performed an extensive evaluation of
both metrics in various color spaces (RGB, HSV, CIELAB and YCrCb) using the TID
2008 benchmark and the FRTV Phase I validation process. Experimental results
demonstrate that the proposed framework achieves good consistency with human
visual perception. Furthermore, the best configuration is obtained with the
CIELAB color space combined with the KLD deviation measure.
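A minimal illustration of the kind of deviation measure described above: for shape parameter β = 1 the MGGD reduces to the multivariate Gaussian, whose Kullback-Leibler divergence has a simple closed form. The covariance matrices below are toy stand-ins for steerable-pyramid coefficient statistics, not values from the paper.

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """Closed-form KL(N(mu0, S0) || N(mu1, S1)) in nats; the Gaussian
    is the beta = 1 special case of the MGGD."""
    k = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0)
                  + diff @ S1_inv @ diff
                  - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

# Toy statistics standing in for reference and degraded coefficients
mu = np.zeros(3)
S_ref = np.eye(3)                    # reference image statistics
S_deg = np.diag([1.5, 1.0, 0.8])     # degraded image statistics
print(kl_gaussian(mu, S_ref, mu, S_deg))
```

The divergence is zero when reference and degraded statistics coincide and grows with the statistical deviation, which is what makes it usable as a quality score.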
On color image quality assessment using natural image statistics
Color distortion can significantly damage perceived visual quality; however,
most existing reduced-reference quality measures are designed for grayscale
images. In this paper, we consider a basic extension of well-known
image-statistics-based quality assessment measures to color images. To
evaluate the impact of color information on the measures' efficiency, two
color spaces are investigated: RGB and CIELAB. Results of an extensive
evaluation using the TID 2013 benchmark demonstrate that significant
improvement can be achieved for a large number of distortion types when the
CIELAB color representation is used.
Do scenario context and question order influence WTP? The application of a model of uncertain WTP to the CV of the morbidity impacts of air pollution
This paper presents a general framework for modelling responses to contingent valuation questions when respondents are uncertain about their "true" WTP. These
models are applied to a contingent valuation data set recording respondents' WTP to avoid episodes of ill-health. Two issues are addressed. First, whether the order in
which a respondent answers a series of contingent valuation questions influences their WTP. Second, whether the context in which a good is valued (in this case, the information the respondent is given concerning the cause of the ill-health episode or the policy put into place to avoid that episode) influences respondents' WTP.
The results of the modelling exercise suggest that neither valuation order nor the context included in the valuation scenario affects the precision with which respondents answer the contingent valuation questions. Similarly, valuation order does not appear to influence the mean or median WTP of the sample. In contrast, it is shown that in some cases the inclusion of richer context significantly shifts both the mean and median WTP of the sample. This result has implications for the application of benefits transfer. Since WTP to avoid an episode of ill-health cannot be shown to be independent of the context in which it is valued, the validity of transferring benefits of avoided ill-health episodes from one policy context to another must be called into question.
An evaluation of intrusive instrumental intelligibility metrics
Instrumental intelligibility metrics are commonly used as an alternative to
listening tests. This paper evaluates 12 monaural intrusive intelligibility
metrics: SII, HEGP, CSII, HASPI, NCM, QSTI, STOI, ESTOI, MIKNN, SIMI, SIIB, and
. In addition, this paper investigates the ability of
intelligibility metrics to generalize to new types of distortions and analyzes
why the top performing metrics have high performance. The intelligibility data
were obtained from 11 listening tests described in the literature. The stimuli
included Dutch, Danish, and English speech that was distorted by additive
noise, reverberation, competing talkers, pre-processing enhancement, and
post-processing enhancement. SIIB and HASPI had the highest performance
achieving a correlation with listening test scores on average of
and , respectively. The high performance of SIIB may, in part, be
the result of SIIBs developers having access to all the intelligibility data
considered in the evaluation. The results show that intelligibility metrics
tend to perform poorly on data sets that were not used during their
development. By modifying the original implementations of SIIB and STOI, the
advantage of reducing statistical dependencies between input features is
demonstrated. Additionally, the paper presents a new version of SIIB called
, which has similar performance to SIIB and HASPI,
but is two orders of magnitude faster to compute. Comment: Published in IEEE/ACM Transactions on Audio, Speech, and Language
Processing, 201
A tutorial on cue combination and Signal Detection Theory: Using changes in sensitivity to evaluate how observers integrate sensory information
Many sensory inputs contain multiple sources of information ("cues"), such as two sounds of different frequencies, or a voice heard in unison with moving lips. Often, each cue provides a separate estimate of the same physical attribute, such as the size or location of an object. An ideal observer can exploit such redundant sensory information to improve the accuracy of their perceptual judgments. For example, if each cue is modeled as an independent, Gaussian, random variable, then combining N cues should provide up to a √N improvement in detection/discrimination sensitivity. Alternatively, a less efficient observer may base their decision on only a subset of the available information, and so gain little or no benefit from having access to multiple sources of information. Here we use Signal Detection Theory to formulate and compare various models of cue combination, many of which are commonly used to explain empirical data. We alert the reader to the key assumptions inherent in each model, and provide formulas for deriving quantitative predictions. Code is also provided for simulating each model, allowing expected levels of measurement error to be quantified. Based on these results, it is shown that predicted sensitivity often differs surprisingly little between qualitatively distinct models of combination. This means that sensitivity alone is not sufficient for understanding decision efficiency, and the implications of this are discussed.
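The √N prediction above follows from the standard Signal Detection Theory result that, for independent Gaussian cues, an ideal observer's combined sensitivity is the root-sum-square of the single-cue d' values. A minimal sketch (the function name and d' values are illustrative, not taken from the tutorial's code):

```python
import numpy as np

def combined_dprime(dprimes):
    """Ideal-observer sensitivity for independent Gaussian cues:
    d'_combined = sqrt(sum of squared single-cue d' values)."""
    d = np.asarray(dprimes, dtype=float)
    return float(np.sqrt(np.sum(d ** 2)))

# N identical cues with d' = 1 each: the predicted gain is sqrt(N)
for n in (1, 2, 4):
    print(n, combined_dprime([1.0] * n))  # 1.0, ~1.414, 2.0
```

A less efficient observer who uses only one of the N cues would stay at d' = 1, which is the kind of model contrast the tutorial formalizes.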
Learning to Rank Academic Experts in the DBLP Dataset
Expert finding is an information retrieval task that is concerned with the
search for the most knowledgeable people with respect to a specific topic, and
the search is based on documents that describe people's activities. The task
involves taking a user query as input and returning a list of people who are
sorted by their level of expertise with respect to the user query. Despite
recent interest in the area, the current state-of-the-art techniques lack in
principled approaches for optimally combining different sources of evidence.
This article proposes two frameworks for combining multiple estimators of
expertise. These estimators are derived from textual contents, from
graph-structure of the citation patterns for the community of experts, and from
profile information about the experts. More specifically, this article explores
the use of supervised learning to rank methods, as well as rank aggregation
approaches, for combing all of the estimators of expertise. Several supervised
learning algorithms, which are representative of the pointwise, pairwise and
listwise approaches, were tested, and various state-of-the-art data fusion
techniques were also explored for the rank aggregation framework. Experiments
that were performed on a dataset of academic publications from the Computer
Science domain attest to the adequacy of the proposed approaches. Comment: Expert Systems, 2013. arXiv admin note: text overlap with
arXiv:1302.041
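To give a flavor of the rank-aggregation framework described above, one classic data-fusion baseline is Borda count, which merges the ranked lists produced by different expertise estimators. This is a generic sketch, not the paper's specific method; the expert names and the three estimator lists are invented for illustration.

```python
def borda_aggregate(rankings):
    """Combine several ranked lists of candidates by Borda count:
    each list awards (len - position) points to every candidate,
    and candidates are returned sorted by total points."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, cand in enumerate(ranking):
            scores[cand] = scores.get(cand, 0) + (n - pos)
    return sorted(scores, key=lambda c: (-scores[c], c))

# Three toy estimators (textual, citation-graph, profile) ranking experts
text     = ["alice", "bob", "carol"]
citation = ["bob", "alice", "carol"]
profile  = ["alice", "carol", "bob"]
print(borda_aggregate([text, citation, profile]))
```

Supervised learning-to-rank methods replace these fixed positional scores with weights learned from relevance judgments, which is the other framework the article explores.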
Flexible Modelling of Discrete Failure Time Including Time-Varying Smooth Effects
Discrete survival models have been extended in several ways. More flexible models are obtained by including time-varying coefficients and covariates that determine the hazard rate in an additive but not further specified form. In this paper, a general model is considered which comprises both types of covariate effects. An additional extension is the incorporation of smooth interactions between time and covariates; thus, smooth effects of covariates that may vary across time are allowed in the linear predictor. It is shown how simple duration models produce artefacts which may be avoided by flexible models. For the general model, which includes parametric terms, time-varying coefficients in parametric terms, and time-varying smooth effects, estimation procedures are derived that are based on the regularized expansion of smooth effects in basis functions.
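Discrete-time hazard models of this kind are typically fitted after expanding each subject's record into person-period rows, with one row per interval at risk. A minimal sketch of that expansion (the function and field names are illustrative; time-varying smooth effects would then enter as basis-function columns in `t`):

```python
def person_period(duration, event):
    """Expand one subject's (duration, event) record into the
    person-period rows used to fit a discrete-time hazard model:
    one row per interval at risk, with y = 1 only in the interval
    where the event occurs (all zeros if censored)."""
    rows = []
    for t in range(1, duration + 1):
        y = 1 if (event and t == duration) else 0
        rows.append({"t": t, "y": y})
    return rows

# Subject observed for 3 intervals, event occurs in interval 3
print(person_period(3, True))
```

A binary regression of `y` on `t` and the covariates over these rows then estimates the discrete hazard, and letting coefficients depend smoothly on `t` yields the time-varying effects discussed above.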
Optimal Receiver Design for Diffusive Molecular Communication With Flow and Additive Noise
In this paper, we perform receiver design for a diffusive molecular
communication environment. Our model includes flow in any direction, sources of
information molecules in addition to the transmitter, and enzymes in the
propagation environment to mitigate intersymbol interference. We characterize
the mutual information between receiver observations to show how often
independent observations can be made. We derive the maximum likelihood sequence
detector to provide a lower bound on the bit error probability. We propose the
family of weighted sum detectors for more practical implementation and derive
their expected bit error probability. Under certain conditions, the performance
of the optimal weighted sum detector is shown to be equivalent to a matched
filter. Receiver simulation results show the tradeoff in detector complexity
versus achievable bit error probability, and that a slow flow in any direction
can improve the performance of a weighted sum detector. Comment: 14 pages, 7 figures, 1 appendix. To appear in IEEE Transactions on
NanoBioscience (submitted July 31, 2013, revised June 18, 2014, accepted July
7, 2014).
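A weighted sum detector of the kind described above reduces to a thresholded inner product of the receiver's observation counts. A minimal sketch with equal weights (the counts and threshold below are invented for illustration, not derived from the paper's channel model):

```python
import numpy as np

def weighted_sum_detect(observations, weights, threshold):
    """Weighted sum detector: decide bit 1 iff the weighted sum of
    receiver observations (molecule counts) exceeds a threshold."""
    return int(np.dot(weights, observations) > threshold)

# Five equally weighted observations per bit interval (toy counts)
w = np.ones(5)
obs_bit1 = np.array([8, 9, 7, 10, 8])  # counts when bit 1 was sent
obs_bit0 = np.array([2, 1, 3, 2, 2])   # counts when bit 0 was sent
print(weighted_sum_detect(obs_bit1, w, 25),
      weighted_sum_detect(obs_bit0, w, 25))
```

Choosing the weights and threshold optimally is the design problem the paper addresses; under the conditions it identifies, the optimal weighting coincides with a matched filter.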