12 research outputs found

    Assessing mental health service user and carer involvement in physical health care planning: The development and validation of a new patient-reported experience measure

    Get PDF
    Background: People living with serious mental health conditions experience increased morbidity due to physical health issues driven by medication side-effects and lifestyle factors. Coordinated mental and physical healthcare delivered in accordance with a care plan could help to reduce morbidity and mortality in this population. Efforts to develop new models of care are hampered by a lack of validated instruments to accurately assess the extent to which mental health service users and carers are involved in care planning for physical health. Objective: To develop a brief and accurate patient-reported experience measure (PREM) capable of assessing involvement in physical health care planning for mental health service users and their carers. Methods: We employed psychometric and statistical techniques to refine a bank of candidate questionnaire items, derived from qualitative interviews, into a valid and reliable measure of involvement in physical health care planning. We assessed the psychometric performance of the item bank using modern psychometric analyses: unidimensionality, scalability, fit to the partial credit Rasch model, category threshold ordering, local dependency, differential item functioning, and test-retest reliability. Once the bank was purified of poorly performing and erroneous items, we simulated computerized adaptive testing (CAT) with 15, 10, and 5 items using the calibrated item bank. Results: Issues with category threshold ordering, local dependency, and differential item functioning were evident for a number of items in the nascent item bank and were resolved by removing the problematic items. The final 19-item PREM had excellent fit to the Rasch model (χ2 = 192.94, df = 1515, P = .02; RMSEA = .03, 95% CI = .01-.04) and excellent reliability (marginal r = 0.87). The correlation between questionnaire scores at baseline and 2-week follow-up was high (r = .70). Discussion: We developed a flexible patient-reported experience measure to quantify service user and carer involvement in physical health care planning, and demonstrated the potential to substantially reduce assessment length whilst maintaining reliability by utilizing CAT.
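    The item-selection loop that makes CAT efficient can be sketched in a few lines. The following is a minimal simulation, assuming a dichotomous two-parameter logistic (2PL) model and invented item parameters; the study itself used the polytomous partial credit Rasch model, and none of the numbers below come from the published item bank.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical calibrated bank of 19 dichotomously scored items:
    # discrimination (a) and difficulty (b) parameters, illustrative only.
    a = rng.uniform(1.0, 2.5, size=19)
    b = rng.normal(0.0, 1.0, size=19)

    def p_endorse(theta, a_i, b_i):
        """2PL probability of endorsing an item at trait level theta."""
        return 1.0 / (1.0 + np.exp(-a_i * (theta - b_i)))

    def item_information(theta):
        """Fisher information of every bank item at theta."""
        p = p_endorse(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    def simulate_cat(true_theta, max_items=5):
        """Administer max_items items, always picking the most informative."""
        theta = 0.0
        administered, responses = [], []
        grid = np.linspace(-4.0, 4.0, 161)
        for _ in range(max_items):
            info = item_information(theta)
            info[administered] = -np.inf          # never repeat an item
            j = int(np.argmax(info))
            administered.append(j)
            responses.append(rng.random() < p_endorse(true_theta, a[j], b[j]))
            # Re-estimate theta by grid-search maximum likelihood.
            ll = np.zeros_like(grid)
            for i, r in zip(administered, responses):
                p = p_endorse(grid, a[i], b[i])
                ll += np.log(p) if r else np.log(1.0 - p)
            theta = float(grid[np.argmax(ll)])
        return theta, administered

    est, items = simulate_cat(true_theta=1.0)
    print(f"estimated theta = {est:.2f} after {len(items)} items")
    ```

    Each step administers whichever unused item is most informative at the current trait estimate, which is why a 5-item CAT can approach the precision of the full bank.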

    The Use of the FACE-Q Aesthetic: A Narrative Review

    Get PDF
    INTRODUCTION: In the past decade there has been increasing interest in the field of patient-reported outcome measures (PROMs), which are now commonly used alongside traditional outcome measures such as morbidity and mortality. Since its development in 2010, the FACE-Q Aesthetic has been widely used in clinical practice and research to measure quality of life and patient satisfaction, quantifying impact and change across different aspects of cosmetic facial surgery and minimally invasive treatments. We review how researchers have utilized the FACE-Q Aesthetic module to date, and aim to better understand whether and how it has enhanced our understanding and practice of aesthetic facial procedures. METHODS: We performed a systematic search of the literature. Publications that used the FACE-Q Aesthetic module to evaluate patient outcomes were included. Publications about the development of PROMs or modifications of the FACE-Q Aesthetic, translation or validation studies of the FACE-Q Aesthetic scales, papers not published in English, reviews, comments/discussions, and letters to the editor were excluded. RESULTS: Our search produced 1189 different articles; 70 remained after applying the inclusion and exclusion criteria. Significant findings and associations were further explored. The need for evidence-based patient-reported outcomes has driven a growing uptake of the FACE-Q Aesthetic in cosmetic surgery and dermatology, with an increasing amount of evidence concerning facelift surgery, botulinum toxin, rhinoplasty, soft tissue fillers, scar treatments, and experimental areas. DISCUSSION: The FACE-Q Aesthetic has been used to contribute substantial evidence about outcomes from the patient perspective in cosmetic facial surgery and minimally invasive treatments. It holds great potential to improve quality of care and may fundamentally change the way we measure success in plastic surgery and dermatology. 
LEVEL OF EVIDENCE III: This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00266-022-02974-9

    Recursive partitioning vs computerized adaptive testing to reduce the burden of health assessments in cleft lip and/or palate: comparative simulation study

    Get PDF
    Background: Computerized adaptive testing (CAT) has been shown to deliver short, accurate, and personalized versions of the CLEFT-Q patient-reported outcome measure for children and young adults born with a cleft lip and/or palate. Decision trees may integrate clinician-reported data (eg, age, gender, cleft type, and planned treatments) to make these assessments even shorter and more accurate. Objective: We aimed to create decision tree models incorporating clinician-reported data into adaptive CLEFT-Q assessments and compare their accuracy to traditional CAT models. Methods: We used relevant clinician-reported data and patient-reported item responses from the CLEFT-Q field test to train and test decision tree models using recursive partitioning. We compared the prediction accuracy of decision trees to CAT assessments of similar length. Participant scores from the full-length questionnaire were used as ground truth. Accuracy was assessed through Pearson’s correlation coefficient of predicted and ground truth scores, mean absolute error, root mean squared error, and a two-tailed Wilcoxon signed-rank test comparing squared error. Results: Decision trees demonstrated poorer accuracy than CAT comparators and generally made data splits based on item responses rather than clinician-reported data. Conclusions: When predicting CLEFT-Q scores, individual item responses are generally more informative than clinician-reported data. Decision trees that make binary splits are at risk of underfitting polytomous patient-reported outcome measure data and demonstrated poorer performance than CATs in this study
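    A recursive-partitioning baseline of the kind compared here can be sketched as follows. This is a minimal illustration on synthetic data, assuming hypothetical polytomous items and clinician-reported covariates (age, cleft type); it does not reproduce the CLEFT-Q field data or the study's models.

    ```python
    import numpy as np
    from sklearn.metrics import mean_absolute_error, mean_squared_error
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(42)

    # Synthetic stand-in for the field data: 12 polytomous items (scored 0-3)
    # driven by one latent trait, plus two clinician-reported covariates
    # (age, cleft type) that carry no extra information about the score.
    n = 1000
    latent = rng.normal(size=n)
    items = np.clip(
        np.round(latent[:, None] + rng.normal(scale=0.8, size=(n, 12)) + 1.5),
        0, 3)
    age = rng.integers(8, 29, size=n)
    cleft_type = rng.integers(0, 4, size=n)
    X = np.column_stack([items, age, cleft_type])
    y = items.sum(axis=1)            # full-length score used as ground truth

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # A depth-limited tree behaves like a short fixed branching assessment:
    # each split "asks" about one feature (item response or covariate).
    tree = DecisionTreeRegressor(max_depth=5, random_state=0)
    tree.fit(X_tr, y_tr)
    pred = tree.predict(X_te)

    r = np.corrcoef(pred, y_te)[0, 1]
    mae = mean_absolute_error(y_te, pred)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"r = {r:.2f}, MAE = {mae:.2f}, RMSE = {rmse:.2f}")

    # Share of split importance given to item responses (first 12 columns)
    # vs the clinician-reported covariates (last 2 columns).
    item_share = tree.feature_importances_[:12].sum()
    print(f"importance on item responses: {item_share:.2f}")
    ```

    On data like this, the importance share makes the study's headline finding easy to see: when item responses drive the score, the tree splits on them and largely ignores the clinician-reported covariates.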

    Machine learning in medicine: a practical introduction

    No full text
    Abstract Background Following visible successes on a wide range of predictive tasks, machine learning techniques are attracting substantial interest from medical researchers and clinicians. We address the need for capacity development in this area by providing a conceptual introduction to machine learning alongside a practical guide to developing and evaluating predictive algorithms using freely available open-source software and public domain data. Methods We demonstrate the use of machine learning techniques by developing three predictive models for cancer diagnosis using descriptions of nuclei sampled from breast masses. These algorithms include regularized general linear model (GLM) regression, support vector machines (SVMs) with a radial basis function kernel, and single-layer artificial neural networks. The publicly available dataset describing the breast mass samples (N=683) was randomly split into evaluation (n=456) and validation (n=227) samples. We trained algorithms on data from the evaluation sample before they were used to predict the diagnostic outcome in the validation dataset. We compared the predictions made on the validation datasets with the real-world diagnostic decisions to calculate the accuracy, sensitivity, and specificity of the three models. We explored the use of averaging and voting ensembles to improve predictive performance. We provide a step-by-step guide to developing algorithms using the open-source R statistical programming environment. Results The trained algorithms were able to classify cell nuclei with high accuracy (.94-.96), sensitivity (.97-.99), and specificity (.85-.94). Maximum accuracy (.96) and area under the curve (.97) were achieved using the SVM algorithm. Prediction performance increased marginally (accuracy = .97, sensitivity = .99, specificity = .95) when algorithms were arranged into a voting ensemble. 
Conclusions We use a straightforward example to demonstrate the theory and practice of machine learning for clinicians and medical researchers. The principles we demonstrate here can be readily applied to other complex tasks, including natural language processing and image recognition
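    The workflow described above can be sketched in Python (the paper's own step-by-step guide uses R). This sketch uses scikit-learn's bundled Wisconsin breast cancer dataset (569 samples, 30 features), a different version of the data than the 683-sample set analysed in the paper, so the exact figures will differ; the hyperparameters are illustrative, not a tuned reproduction.

    ```python
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score, recall_score
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # scikit-learn's bundled breast cancer dataset (569 samples, 30 features);
    # labels: 0 = malignant, 1 = benign.
    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.33, random_state=1, stratify=y)

    # Three model families analogous to those in the paper.
    models = {
        "glm": make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=2000)),
        "svm": make_pipeline(StandardScaler(),
                             SVC(kernel="rbf", probability=True)),
        "nn": make_pipeline(StandardScaler(),
                            MLPClassifier(hidden_layer_sizes=(10,),
                                          max_iter=2000, random_state=1)),
    }
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(name,
              "accuracy", round(accuracy_score(y_te, pred), 3),
              "sensitivity", round(recall_score(y_te, pred), 3))

    # Soft-voting ensemble averages the three models' predicted probabilities.
    ensemble = VotingClassifier(list(models.items()), voting="soft")
    ensemble.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, ensemble.predict(X_te))
    print("ensemble accuracy", round(acc, 3))
    ```

    Scaling inside a pipeline keeps the preprocessing fitted only on training data, which mirrors the paper's separation of evaluation and validation samples.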

    Effectiveness of routine provision of feedback from patient‐reported outcome measurements for cancer care improvement: a systematic review and meta-analysis

    No full text
    Abstract Background Research shows that feeding back patient-reported outcome information to clinicians and/or patients could be associated with improved care processes and patient outcomes. Quantitative syntheses of intervention effects on oncology patient outcomes are lacking. Objective To determine the effects of patient-reported outcome measure (PROM) feedback intervention on oncology patient outcomes. Data sources We identified relevant studies from 116 references included in our previous Cochrane review assessing the intervention for the general population. In May 2022, we conducted a systematic search in five bibliography databases using predefined keywords for additional studies published after the Cochrane review. Study selection We included randomized controlled trials evaluating the effects of PROM feedback intervention on processes and outcomes of care for oncology patients. Data extraction and synthesis We used the meta-analytic approach to synthesize across studies measuring the same outcomes. We estimated pooled effects of the intervention on outcomes using Cohen's d for continuous data and risk ratio (RR) with a 95% confidence interval for dichotomous data. We used a descriptive approach to summarize studies which reported insufficient data for a meta-analysis. Main outcome(s) and measure(s) Health-related quality of life (HRQL), symptoms, patient-healthcare provider communication, number of visits and hospitalizations, number of adverse events, and overall survival. Results We included 29 studies involving 7071 cancer participants. A small number of studies was available for each meta-analysis (median = 3 studies, range 2 to 9) due to heterogeneity in the evaluation of the trials. 
We found that the intervention improved HRQL (Cohen's d = 0.23, 95% CI 0.11–0.34), mental functioning (Cohen's d = 0.14, 95% CI 0.02–0.26), patient-healthcare provider communication (Cohen's d = 0.41, 95% CI 0.20–0.62), and 1-year overall survival (OR = 0.64, 95% CI 0.48–0.86). The risk of bias across studies was considerable in the domains of allocation concealment, blinding, and intervention contamination. Conclusions and relevance Although we found evidence to support the intervention for highly relevant outcomes, our conclusions are tempered by the high risk of bias, relating mainly to intervention design. PROM feedback may improve processes and outcomes for cancer patients, but more high-quality evidence is required.
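    The inverse-variance pooling behind such a meta-analysis can be sketched as follows, using made-up per-study effects rather than the review's actual data. The random-effects step uses the DerSimonian-Laird estimator of between-study variance; the review's own synthesis methods may differ.

    ```python
    import numpy as np

    # Illustrative per-study effects: Cohen's d and its sampling variance
    # for four hypothetical trials (not the review's actual studies).
    d = np.array([0.41, 0.14, 0.23, 0.05])
    v = np.array([0.010, 0.020, 0.015, 0.012])

    # Fixed-effect inverse-variance pooling.
    w = 1.0 / v
    d_pool = np.sum(w * d) / np.sum(w)
    se_pool = np.sqrt(1.0 / np.sum(w))
    ci = (d_pool - 1.96 * se_pool, d_pool + 1.96 * se_pool)

    # DerSimonian-Laird estimate of between-study variance (tau^2).
    Q = np.sum(w * (d - d_pool) ** 2)
    df = len(d) - 1
    C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / C)

    # Random-effects pooling: tau^2 widens every study's variance, which
    # down-weights the most precise studies relative to the fixed model.
    w_re = 1.0 / (v + tau2)
    d_re = np.sum(w_re * d) / np.sum(w_re)

    print(f"fixed effect d = {d_pool:.3f} (95% CI {ci[0]:.3f} to {ci[1]:.3f})")
    print(f"tau^2 = {tau2:.4f}, random-effects d = {d_re:.3f}")
    ```

    With only 2-9 studies per pooled outcome, as reported above, the tau-squared estimate is unstable, which is one reason small meta-analyses warrant cautious interpretation.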

    Deriving an overall appearance domain score by applying bifactor IRT analysis to the BODY-Q appearance scales

    No full text
    Purpose With the BODY-Q, one can assess outcomes, such as satisfaction with appearance, in weight loss and body contouring patients using multiple scales. All scales can be used independently in any given combination or order. Currently, the BODY-Q cannot provide overall appearance scores across scales that measure a similar super-ordinate construct (i.e., overall appearance), which could improve the scales' usefulness as a benchmarking tool and improve the comprehensibility of patient feedback. We explored the possibility of establishing overall appearance scores by applying a bifactor model to the BODY-Q appearance scales. Methods In a bifactor model, questionnaire items load onto both a specific factor and a general factor, such as satisfaction with appearance. The international BODY-Q validation patient sample (n = 734) was used to fit a bifactor model to the appearance domain. Factor loadings, fit indices, and the correlation between the bifactor appearance domain and the satisfaction with body scale were assessed. Results All items loaded on the general factor of their corresponding domain. In the appearance domain, all items demonstrated adequate item fit to the model. All scales had satisfactory fit to the bifactor model (RMSEA 0.045, CFI 0.969, TLI 0.964). The correlation between the appearance domain summary scores and satisfaction with body scale scores was 0.77. Discussion We successfully applied a bifactor model to BODY-Q data with good item and model fit indices. With this method, we were able to produce reliable overall appearance scores, which may improve the interpretability of the BODY-Q while increasing flexibility.
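    The idea of extracting an overall (general-factor) score from a bifactor structure can be sketched as follows. The loadings, sample size, and two-specific-factor layout below are invented for illustration, and the sketch uses regression (Thurstone) factor scores on simulated continuous data; it is not the BODY-Q calibration, which was fitted to ordinal item responses.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Invented bifactor structure: 8 standardized items, each loading on a
    # general "overall appearance" factor plus one of two specific factors.
    n, p = 734, 8
    lam_g = np.full(p, 0.6)               # general-factor loadings
    lam_s = np.zeros((p, 2))
    lam_s[:4, 0] = 0.4                    # specific factor 1 (items 1-4)
    lam_s[4:, 1] = 0.4                    # specific factor 2 (items 5-8)

    # Simulate data from the model with orthogonal factors.
    g = rng.normal(size=n)
    s = rng.normal(size=(n, 2))
    uniq = 1.0 - lam_g ** 2 - (lam_s ** 2).sum(axis=1)  # unique variances
    X = (g[:, None] * lam_g + s @ lam_s.T
         + rng.normal(size=(n, p)) * np.sqrt(uniq))

    # Regression (Thurstone) factor scores for the general factor:
    # weights = Sigma^{-1} lambda_g with the model-implied covariance Sigma.
    Lam = np.column_stack([lam_g, lam_s])
    Sigma = Lam @ Lam.T + np.diag(uniq)
    weights = np.linalg.solve(Sigma, lam_g)
    g_hat = X @ weights                   # overall appearance score

    r = np.corrcoef(g_hat, g)[0, 1]       # recovery of the true general factor
    print(f"correlation with true general factor: {r:.2f}")
    ```

    Because every item contributes to the general-factor weights, the overall score pools information across scales while the specific factors absorb scale-level variance, which is exactly what makes a single benchmarking score defensible.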


    Development and Validation of a Novel Literature-Based Method to Identify Disparity-Sensitive Surgical Quality Metrics

    No full text
    BACKGROUND: Disparity in surgical care impedes the delivery of uniformly high-quality care. Metrics that quantify disparity in care can help identify areas for needed intervention. A literature-based Disparity-Sensitive Score (DSS) system for surgical care was adapted by the Metrics for Equitable Access and Care in Surgery (MEASUR) group. The alignment between the MEASUR DSS and Delphi ratings of an expert advisory panel (EAP) regarding the disparity sensitivity of surgical quality metrics was assessed. STUDY DESIGN: Using the DSS criteria, MEASUR co-investigators scored 534 surgical metrics, which were subsequently rated by the EAP. All scores were converted to a 9-point scale. Agreement between the new measurement technique (i.e., DSS) and an established subjective technique (i.e., importance and validity ratings) was assessed using the Bland-Altman method, adjusting for the linear relationship between the paired difference and the paired average. The limit of agreement (LOA) was set at 1.96 SD (95%). RESULTS: The percentage of DSS scores inside the LOA was 96.8% (LOA, 0.02 points) for the importance rating and 94.6% (LOA, 1.5 points) for the validity rating. In comparison, 94.4% of the two subjective EAP ratings were inside the LOA (0.7 points). CONCLUSIONS: Applying the MEASUR DSS criteria using the available literature allowed for identification of disparity-sensitive surgical metrics. The results suggest that this literature-based method of selecting quality metrics may be comparable to more complex consensus-based Delphi methods. In fields with robust literature, literature-based composite scores may be used to select quality metrics rather than assembling consensus panels. (J Am Coll Surg 2023;237:856-861)
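    The Bland-Altman limits-of-agreement calculation at the core of this comparison can be sketched as follows, on simulated paired ratings rather than the MEASUR data. For simplicity the sketch omits the study's additional adjustment for a linear relationship between the paired difference and the paired average.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Simulated paired ratings on a common 9-point scale: a literature-based
    # score and an expert-panel rating for each of 534 metrics (invented data).
    n = 534
    truth = rng.uniform(1, 9, size=n)
    dss = np.clip(truth + rng.normal(scale=0.5, size=n), 1, 9)
    panel = np.clip(truth + rng.normal(scale=0.5, size=n), 1, 9)

    diff = dss - panel                    # paired difference
    avg = (dss + panel) / 2               # paired average
    bias = diff.mean()                    # mean difference (systematic bias)
    half_width = 1.96 * diff.std(ddof=1)  # 95% limits of agreement
    lower, upper = bias - half_width, bias + half_width

    # Percentage of paired scores falling inside the limits of agreement.
    inside = 100.0 * np.mean((diff >= lower) & (diff <= upper))
    print(f"bias = {bias:.3f}, LOA = ({lower:.2f}, {upper:.2f}), "
          f"{inside:.1f}% inside")
    ```

    Roughly 95% of differences are expected inside limits set at 1.96 SD, so the percentages reported above (94.6%-96.8%) indicate agreement comparable to that between the two expert ratings themselves.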