
    A statistical reduced-reference method for color image quality assessment

    Although color is a fundamental feature of human visual perception, it has been largely unexplored in reduced-reference (RR) image quality assessment (IQA) schemes. In this paper, we propose a natural scene statistics (NSS) method that efficiently uses this information. It is based on the statistical deviation between the steerable pyramid coefficients of the reference color image and those of the degraded one. We propose and analyze the multivariate generalized Gaussian distribution (MGGD) to model the underlying statistics. In order to quantify the degradation, we develop and evaluate two measures based respectively on the geodesic distance between two MGGDs and on the closed form of the Kullback-Leibler divergence (KLD). We performed an extensive evaluation of both metrics in various color spaces (RGB, HSV, CIELAB and YCrCb) using the TID 2008 benchmark and the FRTV Phase I validation process. Experimental results demonstrate the effectiveness of the proposed framework in achieving good consistency with human visual perception. Furthermore, the best configuration is obtained with the CIELAB color space associated with the KLD deviation measure.
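
    A minimal sketch of the KLD deviation idea described above, restricted to the zero-mean multivariate Gaussian special case of the MGGD, where the Kullback-Leibler divergence has a simple closed form; the steerable pyramid decomposition and the MGGD fitting step are not shown, and the covariance inputs are assumed to come from reference and degraded coefficients:

    ```python
    # Sketch: closed-form KL divergence between two zero-mean multivariate
    # Gaussians (a special case of the MGGD used in the paper). sigma1 and
    # sigma2 stand in for covariances fitted to steerable-pyramid
    # coefficients of the reference and degraded images.
    import numpy as np

    def gaussian_kld(sigma1, sigma2):
        """KL( N(0, sigma1) || N(0, sigma2) ) in nats."""
        d = sigma1.shape[0]
        trace_term = np.trace(np.linalg.inv(sigma2) @ sigma1)
        _, logdet1 = np.linalg.slogdet(sigma1)
        _, logdet2 = np.linalg.slogdet(sigma2)
        return 0.5 * (trace_term - d + logdet2 - logdet1)

    # Toy usage: 3-channel (e.g., CIELAB) coefficient covariances.
    rng = np.random.default_rng(0)
    a = rng.normal(size=(1000, 3))               # "reference" coefficients
    b = a + 0.3 * rng.normal(size=(1000, 3))     # "degraded" coefficients
    print(gaussian_kld(np.cov(a.T), np.cov(b.T)))
    ```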

    On color image quality assessment using natural image statistics

    Color distortion can introduce significant damage to perceived visual quality; however, most existing reduced-reference quality measures are designed for grayscale images. In this paper, we consider a basic extension of well-known image-statistics-based quality assessment measures to color images. In order to evaluate the impact of color information on the measures' efficiency, two color spaces are investigated: RGB and CIELAB. Results of an extensive evaluation using the TID 2013 benchmark demonstrate that a significant improvement can be achieved for a large number of distortion types when the CIELAB color representation is used.
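
    An illustrative sketch (not the paper's exact measure) of extracting simple per-channel image statistics in the two color spaces compared above, using scikit-image's rgb2lab for the CIELAB conversion; the choice of skewness and kurtosis as features is an assumption for illustration:

    ```python
    # Sketch: per-channel natural-image statistics in RGB vs. CIELAB.
    import numpy as np
    from scipy.stats import kurtosis, skew
    from skimage.color import rgb2lab

    def channel_stats(image_float01, space="RGB"):
        """Per-channel (skewness, kurtosis) of a float RGB image in [0, 1]."""
        channels = rgb2lab(image_float01) if space == "CIELAB" else image_float01
        return [(skew(c, axis=None), kurtosis(c, axis=None))
                for c in np.moveaxis(channels, -1, 0)]

    rng = np.random.default_rng(0)
    img = rng.random((64, 64, 3))      # stand-in for a test image
    print(channel_stats(img, "RGB"))
    print(channel_stats(img, "CIELAB"))
    ```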

    Do scenario context and question order influence WTP? The application of a model of uncertain WTP to the CV of the morbidity impacts of air pollution

    This paper presents a general framework for modelling responses to contingent valuation questions when respondents are uncertain about their ‘true’ WTP. These models are applied to a contingent valuation data set recording respondents’ WTP to avoid episodes of ill-health. Two issues are addressed: first, whether the order in which a respondent answers a series of contingent valuation questions influences their WTP; second, whether the context in which a good is valued (in this case, the information the respondent is given concerning the cause of the ill-health episode or the policy put in place to avoid that episode) influences respondents’ WTP. The results of the modelling exercise suggest that neither valuation order nor the context included in the valuation scenario impacts the precision with which respondents answer the contingent valuation questions. Similarly, valuation order does not appear to influence the mean or median WTP of the sample. In contrast, it is shown that in some cases the inclusion of richer context significantly shifts both the mean and median WTP of the sample. This result has implications for the application of benefits transfer: since WTP to avoid an episode of ill-health cannot be shown to be independent of the context in which it is valued, the validity of transferring benefits of avoided ill-health episodes from one policy context to another must be called into question.
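
    For concreteness, a hedged sketch of the standard dichotomous-choice contingent valuation model that frameworks like the one above build on (not the paper's exact specification): latent WTP is normal, a respondent accepts a bid when their WTP exceeds it, and the mean and spread are recovered by maximum likelihood on simulated responses:

    ```python
    # Sketch: latent WTP ~ N(mu, sigma^2); respondent says "yes" to bid b
    # when WTP > b, so P(yes) = Phi((mu - b) / sigma). Fit by MLE.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    true_mu, true_sigma = 20.0, 8.0
    bids = rng.choice([5.0, 10.0, 20.0, 30.0, 40.0], size=500)
    yes = (true_mu + true_sigma * rng.standard_normal(500)) > bids

    def neg_loglik(params):
        mu, log_sigma = params
        p_yes = norm.cdf((mu - bids) / np.exp(log_sigma))
        p = np.clip(np.where(yes, p_yes, 1.0 - p_yes), 1e-12, 1.0)
        return -np.log(p).sum()

    fit = minimize(neg_loglik, x0=[10.0, 1.0])
    print("mean WTP:", fit.x[0], "sigma:", np.exp(fit.x[1]))
    ```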

    An evaluation of intrusive instrumental intelligibility metrics

    Instrumental intelligibility metrics are commonly used as an alternative to listening tests. This paper evaluates 12 monaural intrusive intelligibility metrics: SII, HEGP, CSII, HASPI, NCM, QSTI, STOI, ESTOI, MIKNN, SIMI, SIIB, and $\text{sEPSM}^{\text{corr}}$. In addition, this paper investigates the ability of intelligibility metrics to generalize to new types of distortions and analyzes why the top-performing metrics have high performance. The intelligibility data were obtained from 11 listening tests described in the literature. The stimuli included Dutch, Danish, and English speech that was distorted by additive noise, reverberation, competing talkers, pre-processing enhancement, and post-processing enhancement. SIIB and HASPI had the highest performance, achieving average correlations with listening test scores of $\rho = 0.92$ and $\rho = 0.89$, respectively. The high performance of SIIB may, in part, be the result of SIIB's developers having access to all the intelligibility data considered in the evaluation. The results show that intelligibility metrics tend to perform poorly on data sets that were not used during their development. By modifying the original implementations of SIIB and STOI, the advantage of reducing statistical dependencies between input features is demonstrated. Additionally, the paper presents a new version of SIIB called $\text{SIIB}^{\text{Gauss}}$, which has similar performance to SIIB and HASPI, but is two orders of magnitude faster to compute.
    Comment: Published in IEEE/ACM Transactions on Audio, Speech, and Language Processing, 201
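
    A sketch of the usual evaluation protocol for such metrics, assuming the standard practice of fitting a logistic mapping from metric scores to listening-test scores before computing the Pearson correlation; the data here are synthetic stand-ins:

    ```python
    # Sketch: logistic mapping + Pearson correlation, the common way
    # intelligibility metrics are scored against listening tests.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import pearsonr

    def logistic(x, a, b):
        return 100.0 / (1.0 + np.exp(-a * (x - b)))

    rng = np.random.default_rng(0)
    metric = rng.uniform(0, 30, 80)                      # e.g., SIIB values
    listening = logistic(metric, 0.3, 15) + rng.normal(0, 5, 80)

    params, _ = curve_fit(logistic, metric, listening, p0=[0.1, 15.0])
    rho, _ = pearsonr(logistic(metric, *params), listening)
    print(f"rho = {rho:.2f}")
    ```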

    Learning to Rank Academic Experts in the DBLP Dataset

    Expert finding is an information retrieval task concerned with the search for the most knowledgeable people with respect to a specific topic, based on documents that describe people's activities. The task involves taking a user query as input and returning a list of people sorted by their level of expertise with respect to the user query. Despite recent interest in the area, the current state-of-the-art techniques lack principled approaches for optimally combining different sources of evidence. This article proposes two frameworks for combining multiple estimators of expertise. These estimators are derived from textual contents, from the graph structure of the citation patterns for the community of experts, and from profile information about the experts. More specifically, this article explores the use of supervised learning-to-rank methods, as well as rank aggregation approaches, for combining all of the estimators of expertise. Several supervised learning algorithms, representative of the pointwise, pairwise and listwise approaches, were tested, and various state-of-the-art data fusion techniques were also explored for the rank aggregation framework. Experiments performed on a dataset of academic publications from the Computer Science domain attest to the adequacy of the proposed approaches.
    Comment: Expert Systems, 2013. arXiv admin note: text overlap with arXiv:1302.041
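
    A hedged sketch of the rank aggregation framework mentioned above, using CombSUM over min-max-normalized scores (one classic data fusion rule; whether it matches the paper's exact fusion choices is an assumption), with illustrative expert names and estimator scores:

    ```python
    # Sketch: fuse several expertise estimators with CombSUM.
    import numpy as np

    def combsum(score_lists):
        """Sum parallel score vectors after min-max normalization."""
        fused = np.zeros(len(score_lists[0]))
        for scores in score_lists:
            s = np.asarray(scores, dtype=float)
            fused += (s - s.min()) / (s.max() - s.min() + 1e-12)
        return fused

    experts = ["ann", "bob", "carol", "dave"]        # illustrative names
    text_scores = [0.9, 0.4, 0.7, 0.1]               # textual-content estimator
    citation_scores = [120, 300, 80, 10]             # citation-graph estimator
    profile_scores = [3, 5, 4, 1]                    # profile estimator

    fused = combsum([text_scores, citation_scores, profile_scores])
    print([experts[i] for i in np.argsort(-fused)])  # best expert first
    ```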

    Flexible Modelling of Discrete Failure Time Including Time-Varying Smooth Effects

    Discrete survival models have been extended in several ways. More flexible models are obtained by including time-varying coefficients and covariates which determine the hazard rate in an additive but not further specified form. In this paper a general model is considered which comprises both types of covariate effects. An additional extension is the incorporation of smooth interactions between time and covariates; thus, smooth effects of covariates which may vary across time are allowed in the linear predictor. It is shown how simple duration models produce artefacts which may be avoided by flexible models. For the general model, which includes parametric terms, time-varying coefficients in parametric terms and time-varying smooth effects, estimation procedures are derived which are based on the regularized expansion of smooth effects in basis functions.
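
    A minimal sketch of the discrete-hazard building block such models extend: each subject is expanded to one row per period at risk and a logistic model is fit for the hazard; time-varying smooth effects would replace the plain covariate column with a basis expansion (not shown), and all data here are simulated:

    ```python
    # Sketch: person-period expansion + logistic hazard for discrete
    # failure times.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, max_t = 200, 10
    x = rng.normal(size=n)                      # one covariate per subject
    event_time = rng.integers(1, max_t + 1, n)  # toy discrete failure times

    rows, labels = [], []
    for i in range(n):
        for t in range(1, event_time[i] + 1):
            rows.append([t, x[i]])                  # period index + covariate
            labels.append(int(t == event_time[i]))  # 1 in the failure period

    model = LogisticRegression().fit(np.array(rows), np.array(labels))
    print("coefficients (time, x):", model.coef_)
    ```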

    Optimal Receiver Design for Diffusive Molecular Communication With Flow and Additive Noise

    In this paper, we perform receiver design for a diffusive molecular communication environment. Our model includes flow in any direction, sources of information molecules in addition to the transmitter, and enzymes in the propagation environment to mitigate intersymbol interference. We characterize the mutual information between receiver observations to show how often independent observations can be made. We derive the maximum likelihood sequence detector to provide a lower bound on the bit error probability. We propose the family of weighted sum detectors for more practical implementation and derive their expected bit error probability. Under certain conditions, the performance of the optimal weighted sum detector is shown to be equivalent to that of a matched filter. Receiver simulation results show the tradeoff between detector complexity and achievable bit error probability, and that a slow flow in any direction can improve the performance of a weighted sum detector.
    Comment: 14 pages, 7 figures, 1 appendix. To appear in IEEE Transactions on NanoBioscience (submitted July 31, 2013, revised June 18, 2014, accepted July 7, 2014)
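
    A sketch of the weighted sum detector family described above, under simplifying assumptions (Poisson counting noise, fixed toy expected counts, matched-filter-style weights); the actual weights and threshold in the paper are derived from the channel model:

    ```python
    # Sketch: weighted sum detector -- compare a weighted sum of per-bit
    # molecule counts against a threshold to decide the transmitted bit.
    import numpy as np

    def weighted_sum_detect(observations, weights, threshold):
        """Decide 1 if the weighted sum of counts exceeds the threshold."""
        return int(np.dot(weights, observations) > threshold)

    rng = np.random.default_rng(0)
    signal = np.array([8.0, 6.0, 4.0, 2.0])     # toy expected counts for bit 1
    weights = signal / np.linalg.norm(signal)   # matched-filter-style weights
    noisy_obs = rng.poisson(signal)             # counting noise at the receiver
    print(weighted_sum_detect(noisy_obs, weights, threshold=8.0))
    ```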