
    Psychometric scaling of TID2013 dataset

    TID2013 is a subjective image quality assessment dataset with a wide range of distortion types and over 3000 images. The dataset has proven to be a challenging test for objective quality metrics. The dataset mean opinion scores were obtained by collecting pairwise comparison judgments using the Swiss tournament system and averaging the votes of observers. However, this approach differs from the usual analysis of multiple pairwise comparisons, which involves psychometric scaling of the comparison data using either Thurstone or Bradley-Terry models. In this paper we investigate how quality scores change when they are computed using such psychometric scaling instead of averaging vote counts. In order to properly scale TID2013 quality scores, we conduct four additional experiments of two different types, which we found necessary to produce a common quality scale: comparisons with reference images, and cross-content comparisons. We demonstrate on a fifth validation experiment that the two additional types of comparisons are necessary and, in conjunction with psychometric scaling, improve the consistency of quality scores, especially across images depicting different contents.
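    The Bradley-Terry scaling mentioned above can be estimated from a pairwise win-count matrix with a standard minorization-maximization iteration. The sketch below is a minimal illustration with hypothetical comparison counts for three distorted versions of an image; it is not the paper's scaling pipeline, which additionally handles cross-content comparisons.

```python
import math

def bradley_terry(wins, iters=200):
    """Estimate Bradley-Terry quality scores from pairwise comparisons.

    wins[i][j] = number of times condition i was preferred over j.
    Uses the classic MM update p_i <- W_i / sum_j n_ij / (p_i + p_j).
    Returns log-scores normalised to zero mean (a common quality scale).
    """
    n = len(wins)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            w_i = sum(wins[i])  # total wins of condition i
            denom = sum((wins[i][j] + wins[j][i]) / (p[i] + p[j])
                        for j in range(n) if j != i)
            new_p.append(w_i / denom if denom else p[i])
        s = sum(new_p)
        p = [x * n / s for x in new_p]  # renormalise each sweep
    scores = [math.log(x) for x in p]
    mean = sum(scores) / n
    return [sc - mean for sc in scores]

# Hypothetical vote counts: condition 0 is preferred most often.
scores = bradley_terry([[0, 8, 9],
                        [2, 0, 7],
                        [1, 3, 0]])
print(scores)  # condition 0 should receive the highest score
```

    Averaging vote counts, by contrast, ignores *which* conditions each one was compared against, which is precisely the difference the paper investigates.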

    Trained Perceptual Transform for Quality Assessment of High Dynamic Range Images and Video

    In this paper, we propose a trained perceptual transform for quality assessment of high dynamic range (HDR) images and video. The transform is used to convert absolute luminance values found in HDR images into perceptually uniform units, which can be used with any standard-dynamic-range metric. The new transform is derived by fitting the parameters of a previously proposed perceptual encoding function to 4 different HDR subjective quality assessment datasets using Bayesian optimization. The new transform combined with a simple peak signal-to-noise ratio measure achieves better prediction performance in cross-dataset validation than existing transforms. We provide Matlab code for our metric.
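    To illustrate the idea of computing PSNR on perceptually encoded luminance, the sketch below uses the standard PQ (SMPTE ST 2084) inverse EOTF as the encoding function. Note this is an assumption for illustration: the paper fits its own transform parameters, so its curve differs from the stock PQ constants used here.

```python
import math

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(lum):
    """Map absolute luminance (cd/m^2, up to 10000) into perceptually
    uniform code values in [0, 1] via the PQ inverse EOTF."""
    l = max(lum, 0.0) / 10000.0
    lp = l ** M1
    return ((C1 + C2 * lp) / (1 + C3 * lp)) ** M2

def pq_psnr(ref, test):
    """PSNR between two luminance arrays, computed after the perceptual
    transform so that errors are weighted perceptually, not linearly."""
    mse = sum((pq_encode(r) - pq_encode(t)) ** 2
              for r, t in zip(ref, test)) / len(ref)
    return float('inf') if mse == 0 else 10 * math.log10(1.0 / mse)
```

    Because the encoding compresses high luminances, a 20 cd/m^2 error around 100 cd/m^2 is penalised far more than the same error around 5000 cd/m^2, matching the intuition behind using such a transform with an SDR metric.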

    Computational modelling of COVID-19: A study of compliance and superspreaders

    Background: The success of social distancing interventions against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) depends heavily on population compliance. Mathematical modelling has been used extensively to assess how behavioural responses affect the rate of viral transmission. Previous coronavirus epidemics have been characterised by superspreaders, a small number of individuals who each transmit the disease to a large group of people and who contribute to the stochasticity (or randomness) of transmission compared with other pathogens such as influenza. This growing evidence makes it urgent to understand transmission routes in order to target and combat outbreaks. / Objective: To investigate the role of superspreaders in the rate of viral transmission under various levels of compliance. / Method: A SEIRS-inspired social network model is adapted and calibrated to observe the infected links of a general population with and without superspreaders at four compliance levels. Local and global connection parameters are adjusted to simulate close-contact networks and travel restrictions, respectively, and the performance of each is assessed. The mean and standard deviation of infections with superspreaders and non-superspreaders were calculated for each compliance level. / Results: Increased compliance among superspreaders yields a significant reduction in infections. Assuming long-lasting immunity, superspreaders could potentially slow down the spread due to their high connectivity. / Discussion: The main advantage of the network model is that it captures the heterogeneity and locality of social networks, including the role of superspreaders in epidemic dynamics. The main challenge for future empirical work is to direct immediate attention to social settings with targeted interventions that tackle superspreaders. / Conclusion: Superspreaders play a central role in slowing infection spread when they follow compliance guidelines. It is crucial to adjust social distancing measures to prevent future outbreaks, accompanied by population-wide testing and effective tracing.
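    The interaction between compliance and superspreader connectivity can be illustrated with a toy network epidemic in which superspreaders are simply high-degree nodes and compliance scales down the per-contact transmission probability. This is a deliberately simplified SIR-style sketch, not the paper's calibrated SEIRS model; all parameter values below are arbitrary.

```python
import random

def simulate(n=300, base_deg=4, n_super=10, super_deg=40,
             p_trans=0.2, compliance=0.0, steps=60, seed=1):
    """Toy network epidemic. Superspreaders (first n_super nodes) make
    many more contacts; compliance in [0, 1] scales down transmission.
    Returns the total number of nodes ever infected."""
    rng = random.Random(seed)
    adj = [set() for _ in range(n)]
    for u in range(n):
        deg = super_deg if u < n_super else base_deg
        for _ in range(deg):
            v = rng.randrange(n)
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
    state = ['S'] * n
    state[rng.randrange(n)] = 'I'
    p = p_trans * (1 - compliance)
    for _ in range(steps):
        newly = [v for u in range(n) if state[u] == 'I'
                 for v in adj[u] if state[v] == 'S' and rng.random() < p]
        for u in range(n):
            if state[u] == 'I':
                state[u] = 'R'  # recover after one infectious step
        for v in newly:
            state[v] = 'I'
    return sum(s != 'S' for s in state)
```

    Averaged over a few random seeds, raising compliance sharply shrinks the outbreak, and because superspreader nodes sit on so many edges, the same compliance change removes disproportionately many transmission opportunities, mirroring the abstract's central finding.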

    A dandelion-encoded evolutionary algorithm for the delay-constrained capacitated minimum spanning tree problem

    This paper proposes an evolutionary algorithm with Dandelion encoding to tackle the Delay-Constrained Capacitated Minimum Spanning Tree (DC-CMST) problem. This recently proposed problem consists of finding several broadcast trees from a source node, jointly considering traffic and delay constraints in the trees. A version of the problem in which the source node is also included in the optimization process is considered in the paper as well. The Dandelion code used in the proposed evolutionary algorithm has recently been proposed as an effective way of encoding trees in evolutionary algorithms. Good locality properties have been reported for this encoding, which makes it very effective for problems whose solutions can be expressed as trees. In the paper we describe the main characteristics of the algorithm, the implementation of the Dandelion encoding to tackle the DC-CMST problem, and a modification needed to include the source node in the optimization. In the experimental section of this article we compare the results obtained by our evolutionary algorithm with those of a recently proposed heuristic for the DC-CMST, the Least Cost (LC) algorithm. We show that our Dandelion-encoded evolutionary algorithm obtains better results than the LC algorithm on all the instances tackled. (C) 2008 Elsevier B.V. All rights reserved.
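    The general idea of evolving trees through a Cayley-style string encoding can be sketched with the better-known Prüfer code, a relative of the Dandelion code: any sequence of n-2 node labels decodes to a unique labelled spanning tree, so mutation and crossover always yield valid trees. The sketch below is a plain (1+1)-EA minimising total edge weight; it is a stand-in illustration only and omits the delay and capacity constraints of the actual DC-CMST problem.

```python
import heapq
import random

def prufer_decode(seq):
    """Decode a Prüfer sequence into the edge list of a labelled tree
    on n = len(seq) + 2 nodes (any sequence is a valid tree)."""
    n = len(seq) + 2
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    leaves = [i for i in range(n) if degree[i] == 1]
    heapq.heapify(leaves)
    edges = []
    for x in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, x))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, x)
    edges.append((heapq.heappop(leaves), heapq.heappop(leaves)))
    return edges

def tree_cost(seq, dist):
    return sum(dist[u][v] for u, v in prufer_decode(seq))

def evolve(dist, pop_size=30, gens=200, seed=0):
    """(1+1)-EA over tree codes: mutate one symbol, keep improvements."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [[rng.randrange(n) for _ in range(n - 2)] for _ in range(pop_size)]
    best = min(pop, key=lambda s: tree_cost(s, dist))
    for _ in range(gens):
        child = list(best)
        child[rng.randrange(n - 2)] = rng.randrange(n)
        if tree_cost(child, dist) <= tree_cost(best, dist):
            best = child
    return best, tree_cost(best, dist)
```

    The locality property mentioned in the abstract means that a single-symbol mutation of the code usually changes only a small part of the decoded tree, which is what makes such encodings effective inside evolutionary algorithms.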

    Integrating Lean Six Sigma and discrete-event simulation for shortening the appointment lead-time in gynecobstetrics departments: a case study

    Long waiting times for appointments may worry pregnant women, particularly those who need a perinatology consultation, since the delay can increase anxiety and, in a worst-case scenario, lead to an increase in fetal, infant, and maternal mortality. Treatment costs may also increase, since pregnant women with diverse pathologies can develop more severe complications. As a step towards improving this process, we propose a methodological approach to reduce the appointment lead time in outpatient gynecobstetrics departments. This framework combines the Six Sigma method, to identify defects in the appointment scheduling process, with discrete-event simulation (DES), to evaluate the potential success of removing such defects in simulation before we resort to changing the real-world healthcare system. To do this, we initially characterize the gynecobstetrics department using a SIPOC diagram. Then, Six Sigma performance metrics are calculated to evaluate how well the department meets the government target for appointment lead time. Afterwards, a cause-and-effect analysis is undertaken to identify potential causes of appointment lead-time variation. These causes are later validated through ANOVA, regression analysis, and DES. Improvement scenarios are next designed and pretested through computer simulation models. Finally, control plans are deployed to maintain the results achieved through the implementation of the DES-Six Sigma approach. The aforementioned framework was validated in a public gynecobstetrics outpatient department. The results revealed that mean waiting time decreased from 6.9 days to 4.1 days, while the variance fell from 2.46 days² to 1.53 days².
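    The kind of question a DES pretest answers, for example how appointment lead time responds to added daily capacity, can be sketched with a minimal scheduling simulation. This is an illustrative toy with made-up arrival and capacity numbers, not the study's calibrated model.

```python
import random

def mean_lead_time(mean_arrivals_per_day, slots_per_day, days=365, seed=7):
    """Toy appointment-booking simulation: each patient books the
    earliest day with a free slot; lead time = booked day - arrival day.
    Arrivals per day are roughly Binomial with the given mean."""
    rng = random.Random(seed)
    booked = {}          # day -> number of slots already taken
    next_free = 0        # earliest day that might still have capacity
    lead_times = []
    for day in range(days):
        n = sum(1 for _ in range(mean_arrivals_per_day * 2)
                if rng.random() < 0.5)
        for _ in range(n):
            d = max(day, next_free)
            while booked.get(d, 0) >= slots_per_day:
                d += 1   # advance to the next day with a free slot
            booked[d] = booked.get(d, 0) + 1
            next_free = d
            lead_times.append(d - day)
    return sum(lead_times) / len(lead_times) if lead_times else 0.0
```

    Running this with demand just above capacity shows lead times growing without bound, while a small capacity increase above the arrival rate collapses them, which is why pretesting scenarios in simulation before changing the real system is so valuable.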

    Serum levels and removal by haemodialysis and haemodiafiltration of tryptophan-derived uremic toxins in ESKD patients

    Tryptophan is an essential dietary amino acid that gives rise to uremic toxins contributing to end-stage kidney disease (ESKD) patient outcomes. We evaluated serum levels and removal during haemodialysis and haemodiafiltration of tryptophan and the tryptophan-derived uremic toxins indoxyl sulfate (IS) and indole acetic acid (IAA) in ESKD patients in different dialysis treatment settings. This prospective multicentre study in four European dialysis centres enrolled 78 patients with ESKD. Blood and spent dialysate samples obtained during dialysis were analysed with high-performance liquid chromatography to assess uremic solutes, their reduction ratio (RR) and total removed solute (TRS). Mean free serum tryptophan and IS concentrations increased, and the concentration of IAA decreased, relative to pre-dialysis levels (67%, 49%, -0.8%, respectively) during the first hour of dialysis. While mean serum total urea, IS and IAA concentrations decreased during dialysis (-72%, -39%, -43%, respectively), serum tryptophan levels increased, resulting in a negative RR (-8%) towards the end of the dialysis session (p < 0.001), despite remarkable Trp losses into the dialysate. RR and TRS values based on serum (total, free) and dialysate solute concentrations were lower for conventional low-flux dialysis (p < 0.001). High-efficiency haemodiafiltration resulted in 80% higher Trp losses than conventional low-flux dialysis, despite similar neutral Trp RR values. In conclusion, serum Trp concentrations and RR behave differently from those of the uremic solutes IS, IAA and urea, and the Trp RR did not reflect dialysis Trp losses. Conventional low-flux dialysis may not adequately clear Trp-related uremic toxins, while high-efficiency haemodiafiltration increased Trp losses.
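    The two removal measures compared in the study have standard definitions: the reduction ratio is the relative drop in serum concentration over the session, and the total removed solute is recovered from the spent dialysate. The sketch below uses these standard formulas with hypothetical concentration values (not the study's data) to show how a serum concentration that *rises* during dialysis yields a negative RR even while solute is being lost into the dialysate.

```python
def reduction_ratio(c_pre, c_post):
    """Reduction ratio (%) of a solute across a dialysis session:
    RR = 100 * (C_pre - C_post) / C_pre. Negative if the level rises."""
    return 100.0 * (c_pre - c_post) / c_pre

def total_removed_solute(dialysate_conc_mg_per_l, dialysate_volume_l):
    """Total removed solute (mg) from spent-dialysate measurements."""
    return dialysate_conc_mg_per_l * dialysate_volume_l

# Hypothetical numbers: serum Trp rising during dialysis gives RR < 0,
# even though the spent dialysate shows real Trp removal.
print(reduction_ratio(54.0, 58.3))          # ≈ -7.96 %
print(total_removed_solute(10.0, 120.0))    # 1200.0 mg recovered
```

    This is exactly the discrepancy the abstract highlights: a serum-based RR can be negative while the dialysate-based TRS documents substantial removal, so Trp RR alone does not reflect dialysis Trp losses.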

    A Comprehensive Study on Pain Assessment from Multimodal Sensor Data

    Pain assessment is a critical aspect of healthcare, influencing timely interventions and patient well-being. Traditional pain evaluation methods often rely on subjective patient reports, leading to inaccuracies and disparities in treatment, especially for patients who have difficulty communicating due to cognitive impairments. Our contributions are three-fold. Firstly, we analyze the correlations of the data extracted from biomedical sensors. Then, we use state-of-the-art computer vision techniques to analyze videos focusing on the facial expressions of the patients, both per-frame and using the temporal context. We compare them and provide a baseline for pain assessment methods using two popular benchmarks: the UNBC-McMaster Shoulder Pain Expression Archive Database and the BioVid Heat Pain Database. We achieved an accuracy of over 96% and over 94% for the F1 score, recall and precision metrics in pain estimation using single frames with the UNBC-McMaster dataset, employing state-of-the-art computer vision techniques such as Transformer-based architectures for vision tasks. In addition, from the conclusions drawn from the study, future lines of work in this area are discussed.
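    The reported metrics (accuracy, precision, recall, F1) follow their standard definitions for binary pain/no-pain frame labels. A minimal sketch, using made-up labels rather than any benchmark data:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from binary frame labels
    (1 = pain, 0 = no pain), computed from the confusion counts."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1

# Toy example: one painful frame is missed by the classifier.
print(binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 0, 1]))
```

    Reporting all four together matters on pain datasets, which are heavily imbalanced towards no-pain frames: accuracy alone can look high while recall on the rare pain frames stays poor.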