
    Reliability and validity of a self-administration version of DEMQOL-Proxy.

    Objective: This study aimed to investigate the reliability and validity of a self-administered version of DEMQOL-Proxy, a disease-specific instrument that measures health-related quality of life in people with dementia. Methods: The sample consisted of 173 informal carers of people with dementia, aged 29 to 89 years. Carers were mostly female, White/White British and closely related to the patient. They completed DEMQOL-Proxy (self-administered), EQ-5D-3L (proxy reported about the person with dementia), EQ-5D-3L (self-reported about their own health) and the Zarit Burden Interview. Using well-established methods from classical test theory, we evaluated scale-level acceptability, reliability and convergent, discriminant and known-groups validity of DEMQOL-Proxy. Results: DEMQOL-Proxy (self-administered) showed high acceptability (3.5% missing data and 0% of scores at floor or ceiling), high internal consistency reliability (α = 0.93) and good convergent and discriminant validity. Among other findings, there was a moderately high correlation with EQ-5D-3L proxy reported (r = 0.52) and low to essentially zero correlations with EQ-5D-3L self-reported (r = 0.20) and carer and patient background variables (r ≤ 0.20). As predicted, DEMQOL-Proxy (self-administered) showed a modest correlation with DEMQOL (r = 0.32). Known-groups differences in health-related quality of life (comparing people with versus without cognitive impairment) were of moderate effect size (d = 0.38) and in the expected direction. Conclusions: DEMQOL-Proxy (self-administered) has acceptability, reliability and validity comparable to those of DEMQOL-Proxy (interviewer administered). DEMQOL-Proxy (self-administered) can be used in a wider variety of contexts than its interviewer-administered version, including routine use in busy clinics. Copyright © 2016 John Wiley & Sons, Ltd.
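
    The classical test theory checks reported above (floor/ceiling rates, internal consistency, known-groups effect size) are straightforward to reproduce. Below is a minimal illustrative sketch, assuming item responses are held in a pandas DataFrame with one row per carer; the item count, column names, scoring range and grouping variable are hypothetical, not the DEMQOL-Proxy scoring key.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def floor_ceiling(total: pd.Series, minimum: float, maximum: float) -> tuple[float, float]:
    """Percentage of respondents at the lowest and highest possible total score."""
    return 100 * (total == minimum).mean(), 100 * (total == maximum).mean()

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Known-groups effect size using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled_sd

# Hypothetical example: 31 items scored 1-4 for 173 respondents (assumed layout, not the real data).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 5, size=(173, 31)),
                     columns=[f"item{i}" for i in range(1, 32)])
total = items.sum(axis=1)
print(cronbach_alpha(items))
print(floor_ceiling(total, minimum=31, maximum=124))
group = rng.integers(0, 2, size=173).astype(bool)  # hypothetical known-groups indicator
print(cohens_d(total[group].to_numpy(), total[~group].to_numpy()))
```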

    The Leeds Assessment of Neuropathic Symptoms and Signs Scale (LANSS) is not an adequate outcome measure of pressure ulcer-related neuropathic pain

    Background: Few pain assessment scales have been used in pressure ulcer (PU) research, and none has been developed or validated for people with PUs. We examined the Leeds Assessment of Neuropathic Symptoms and Signs (LANSS) scale to determine its utility as an outcome measure for people with pressure-area related pain. Methods: LANSS data from 728 participants underwent psychometric analyses: traditional tests of data quality, scaling assumptions, reliability and validity, and a Rasch analysis including tests of fit, spread and targeting of item locations, response dependency, person separation index (reliability) and differential item functioning. Results: Our findings offer support for a unidimensional scale; confirmatory factor analysis indicated a non-significant chi-square test of model fit (χ²(14) = 23.48, p = 0.053). However, some misfit was identified at the overall scale and individual item levels, and the internal construct validity of the LANSS as an outcome measure for neuropathic pain in people with pressure-area related pain was not supported: item-total correlations were low to moderate (χ²(28) = 55.546, p = 0.002), inter-item correlations were low (mean 0.117, range 0.063–0.415), and both Cronbach's alpha (0.549) and the person separation index (0.334) were low. Conclusions: The requirements for reliable and valid measurement do not support the use of the LANSS as an outcome measure in people with PUs at the individual level or as a generalised measurement scale of neuropathic pain across ulcer severity groups. Expanding the number of items to aid differentiation between neuropathic pain levels and improving scale reliability are recommended.
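
    A quick way to see why low item-total and inter-item correlations undermine a summed score is to compute them directly. The sketch below is illustrative only, assuming LANSS item responses in a DataFrame with one column per item; it is not the Rasch analysis reported in the paper.

```python
import pandas as pd

def corrected_item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the total of the remaining items (the item is removed from its own total)."""
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col]) for col in items.columns})

def mean_inter_item_correlation(items: pd.DataFrame) -> float:
    """Average of the off-diagonal entries of the item correlation matrix."""
    corr = items.corr()
    k = corr.shape[0]
    return (corr.values.sum() - k) / (k * (k - 1))

# Hypothetical usage: low values here would mirror the pattern reported for the LANSS.
# items = pd.read_csv("lanss_items.csv")   # assumed file: one column per item, one row per participant
# print(corrected_item_total_correlations(items))
# print(mean_inter_item_correlation(items))
```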

    Variable length testing using the ordinal regression model.

    Health questionnaires are often built up from sets of questions that are totaled to obtain a sum score. An important consideration in designing questionnaires is to minimize respondent burden. An increasingly popular method for efficient measurement is computerized adaptive testing; unfortunately, many health questionnaires do not meet the requirements for this method. In this paper, a new sequential method for efficiently obtaining sum scores via the computer is introduced, which does not have such requirements and is based on the ordinal regression model. In the assessment, future scores are predicted from past responses, and when an acceptable level of uncertainty is achieved, the procedure is terminated. Two simulation studies were performed to illustrate the usefulness of the procedure. The first used artificially generated symptom scores, and the second was a post hoc simulation using real responses on the Center for Epidemiologic Studies Depression scale. In both studies, the sequential method substantially reduced the respondent burden while maintaining a high sum score quality. Benefits and limitations of this new methodology are discussed. © 2013 John Wiley & Sons, Ltd.
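
    The stopping logic described above can be sketched in a few lines: after each response, form a predictive distribution over the remaining unanswered items, derive the implied distribution of the final sum score, and stop once its uncertainty falls below a tolerance. The sketch below is a simplified illustration under an independence assumption; `predict_category_probs` is a hypothetical placeholder for a pre-fitted ordinal regression model, not the authors' implementation.

```python
import numpy as np

def predict_category_probs(item: int, answered: dict[int, int]) -> np.ndarray:
    """Placeholder for an ordinal regression model: returns P(score = 0..3) for an
    unanswered item given the responses collected so far. Here: a flat distribution."""
    return np.full(4, 0.25)

def administer(items: list[int], ask, max_sd: float = 1.5) -> tuple[float, dict[int, int]]:
    """Ask items one at a time; stop when the predicted sum score is precise enough."""
    answered: dict[int, int] = {}
    for item in items:
        answered[item] = ask(item)                      # collect the next response
        remaining = [i for i in items if i not in answered]
        # Predictive mean and variance of the final sum, assuming independence of remaining items.
        means, variances = [], []
        for i in remaining:
            p = predict_category_probs(i, answered)
            scores = np.arange(len(p))
            mu = float(p @ scores)
            means.append(mu)
            variances.append(float(p @ (scores - mu) ** 2))
        predicted_sum = sum(answered.values()) + sum(means)
        if np.sqrt(sum(variances)) <= max_sd:           # acceptable uncertainty: terminate early
            return float(predicted_sum), answered
    return float(sum(answered.values())), answered

# Hypothetical usage: simulate a respondent who always answers 2 on a 20-item, 0-3 scale.
score, responses = administer(items=list(range(20)), ask=lambda item: 2)
print(score, len(responses))
```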

    Rating the methodological quality in systematic reviews of studies on measurement properties: a scoring system for the COSMIN checklist

    Background: The COSMIN checklist is a standardized tool for assessing the methodological quality of studies on measurement properties. It contains 9 boxes, each dealing with one measurement property, with 5–18 items per box about design aspects and statistical methods. Our aim was to develop a scoring system for the COSMIN checklist to calculate quality scores per measurement property when using the checklist in systematic reviews of measurement properties. Methods: The scoring system was developed based on discussions among experts and testing of the scoring system on 46 articles from a systematic review. Four response options were defined for each COSMIN item (excellent, good, fair, and poor). A quality score per measurement property is obtained by taking the lowest rating of any item in a box ("worst score counts"). Results: Specific criteria for excellent, good, fair, and poor quality for each COSMIN item are described. In defining the criteria, the "worst score counts" algorithm was taken into consideration. This means that only fatal flaws were defined as poor quality. The scores of the 46 articles show how the scoring system can be used to provide an overview of the methodological quality of studies included in a systematic review of measurement properties. Conclusions: Based on experience in testing this scoring system on 46 articles, the COSMIN checklist with the proposed scoring system seems to be a useful tool for assessing the methodological quality of studies included in systematic reviews of measurement properties. © The Author(s) 2011.
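
    The "worst score counts" algorithm itself is simple to express: each COSMIN box receives the lowest of its item ratings, so a single fatal flaw drives the box to poor. A minimal sketch follows; the box and item names are invented for illustration and are not the actual COSMIN items.

```python
# Ordered ratings used by the COSMIN scoring system, from worst to best.
RATING_ORDER = {"poor": 0, "fair": 1, "good": 2, "excellent": 3}

def box_score(item_ratings: dict[str, str]) -> str:
    """'Worst score counts': the quality score for a box is the lowest rating of any of its items."""
    return min(item_ratings.values(), key=RATING_ORDER.__getitem__)

# Hypothetical example for one measurement-property box: one fatal flaw yields a 'poor' box score.
internal_consistency_box = {
    "sample_size": "excellent",
    "missing_data_handling": "good",
    "unidimensionality_checked": "poor",
    "statistic_appropriate": "good",
}
print(box_score(internal_consistency_box))  # -> poor
```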

    Validation of an Estonian version of the Parkinson's Disease Questionnaire (PDQ-39)

    Introduction: Diagnosis and management of Parkinson's disease (PD) rely heavily on evaluation of clinical symptoms and patients' subjective perception of their condition. The purpose of this study was to evaluate the validity, acceptability and reliability of the Estonian version of the 39-item Parkinson's Disease Questionnaire (PDQ-39). Methods: Study subjects were approached during their regular clinic follow-up visits. 104 patients consented to the study, and 81 completed questionnaires were used for subsequent testing of psychometric characteristics, validity and reliability. Results: Content validity was assessed through qualitative content analysis during the pilot study; patients indicated that the questions were relevant for measuring the quality of life of people with PD. Ceiling and floor effects were within the 15% limit for the Summary Index and for all domains except Stigma, Social Support and Communication, where the ceiling effect accounted for 16% to 24% of responses. Convergent validity was examined through the correlation between disease severity and PDQ-39 domains: there was a statistically significant difference between domain scores in patients with mild versus moderate PD for Mobility, ADL and Communication, but not for Stigma, Social Support and Cognition. Reliability was good: Cronbach's alpha for all domains and the Summary Index was over 0.8, and item-test correlations between domains and the Summary Index ranged from 0.56 to 0.83. Conclusion: The psychometric characteristics of the Estonian version of the PDQ-39 were satisfactory, and the results were comparable to those of previous validation studies in other cultural settings in the UK, USA, Canada, Spain and Italy. The Estonian version of the PDQ-39 is an acceptable, valid and reliable instrument for measuring quality of life in PD patients.

    Prioritisation of patients on waiting lists for hip and knee arthroplasties and cataract surgery: Instruments validation

    Background: Prioritisation instruments were developed for patients on waiting lists for hip and knee arthroplasties (AI) and for cataract surgery (CI). The aim of the study was to assess their convergent and discriminant validity and inter-observer reliability. Methods: Multicentre validation study including orthopaedic surgeons and ophthalmologists from 10 hospitals. Participating doctors were asked to include all eligible patients placed on the waiting list for the procedures under study during the medical visit. Doctors assessed patients' priority with a visual analogue scale (VAS) and administered the prioritisation instrument. Information on socio-demographic data and health-related quality of life (HRQOL) (HUI3, EQ-5D, WOMAC and VF-14) was obtained through a telephone interview with patients. Correlation coefficients between the prioritisation instrument score and the VAS and HRQOL measures were calculated. For the reliability study, a self-administered questionnaire containing hypothetical patient scenarios was sent by post to the doctors, who assessed the priority of each scenario with the prioritisation instrument; the intraclass correlation coefficient (ICC) between doctors was calculated. Results: Correlations with the VAS were strong for the AI (0.64, 95% CI: 0.59–0.68) and the CI (0.65, 95% CI: 0.62–0.69), and moderate between the WOMAC and the AI (0.39, 95% CI: 0.33–0.45) and between the VF-14 and the CI (0.38, 95% CI: 0.33–0.43). The results of the discriminant analysis were in general as expected. Inter-observer reliability was 0.79 (95% CI: 0.64–0.94) for the AI and 0.79 (95% CI: 0.63–0.95) for the CI. Conclusion: The results show acceptable validity and reliability of the prioritisation instruments for establishing priority for surgery.
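
    Inter-observer reliability of the kind reported above is usually summarised with an intraclass correlation from a two-way ANOVA decomposition (raters crossed with scenarios). Below is a self-contained sketch of ICC(2,1) (two-way random effects, absolute agreement, single measures); the data layout (rows = hypothetical patient scenarios, columns = doctors) and the choice of ICC form are assumptions, since the abstract does not specify them.

```python
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) (Shrout & Fleiss): rows are targets (scenarios), columns are raters (doctors)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((ratings - grand) ** 2).sum()
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical example: 6 scenarios rated by 4 doctors with a 0-100 prioritisation score.
ratings = np.array([
    [81, 78, 85, 80],
    [42, 40, 46, 45],
    [65, 70, 68, 66],
    [20, 25, 22, 18],
    [90, 88, 92, 95],
    [55, 50, 58, 60],
], dtype=float)
print(round(icc_2_1(ratings), 2))
```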

    Predicting implementation from organizational readiness for change: a study protocol

    Background: There is widespread interest in measuring organizational readiness to implement evidence-based practices in clinical care. However, there are a number of challenges to validating organizational measures, including inferential bias arising from the halo effect and method bias, two threats to validity that, while well documented by organizational scholars, are often ignored in health services research. We describe a protocol to comprehensively assess the psychometric properties of a previously developed survey, the Organizational Readiness to Change Assessment. Objectives: Our objective is to conduct a comprehensive assessment of the psychometric properties of the Organizational Readiness to Change Assessment, incorporating methods that specifically address threats from the halo effect and method bias. Methods and design: We will conduct three sets of analyses using longitudinal, secondary data from four partner projects, each testing interventions to improve the implementation of an evidence-based clinical practice. Partner projects field the Organizational Readiness to Change Assessment at baseline (n = 208 respondents; 53 facilities) and prospectively assess the degree to which the evidence-based practice is implemented. We will assess predictive and concurrent validity using hierarchical linear modeling and multivariate regression, respectively. For predictive validity, the outcome is the change from baseline to follow-up in the use of the evidence-based practice. We will use intraclass correlations derived from hierarchical linear models to assess inter-rater reliability. Two partner projects will also field measures of job satisfaction for convergent and discriminant validity analyses, and will field Organizational Readiness to Change Assessment measures at follow-up for concurrent validity (n = 158 respondents; 33 facilities). Convergent and discriminant validity will be tested through associations between organizational readiness and different aspects of job satisfaction: satisfaction with leadership, which should be highly correlated with readiness, versus satisfaction with salary, which should be less correlated with readiness. Content validity will be assessed using an expert panel and a modified Delphi technique. Discussion: We propose a comprehensive protocol for validating a survey instrument for assessing organizational readiness to change that specifically addresses key threats of bias related to the halo effect, method bias and questions of construct validity that often go unexplored in research using measures of organizational constructs.
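
    For the predictive validity and inter-rater reliability analyses described in this protocol, a random-intercept (hierarchical linear) model with respondents nested in facilities is the natural starting point. The sketch below uses statsmodels; the variable names (readiness, eb_practice_change, facility) and the synthetic data are hypothetical placeholders, and this is only one plausible specification, not the authors' analysis plan.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical synthetic data standing in for the ORCA survey: respondents nested in facilities.
rng = np.random.default_rng(1)
n_facilities, per_facility = 53, 4
facility = np.repeat(np.arange(n_facilities), per_facility)
facility_effect = rng.normal(0, 0.5, n_facilities)[facility]
readiness = 3 + facility_effect + rng.normal(0, 1, facility.size)
eb_practice_change = 0.4 * readiness + facility_effect + rng.normal(0, 1, facility.size)
df = pd.DataFrame({"facility": facility, "readiness": readiness,
                   "eb_practice_change": eb_practice_change})

# Predictive validity: change in evidence-based practice use regressed on baseline readiness,
# with a random intercept for facility (a simple hierarchical linear model).
result = smf.mixedlm("eb_practice_change ~ readiness", data=df, groups=df["facility"]).fit()
print(result.summary())

# Inter-rater reliability as an intraclass correlation from an intercept-only model:
# ICC(1) = between-facility variance / (between-facility variance + residual variance).
null = smf.mixedlm("readiness ~ 1", data=df, groups=df["facility"]).fit()
between_var = float(null.cov_re.iloc[0, 0])
print("ICC(1):", round(between_var / (between_var + null.scale), 2))
```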