32 research outputs found

    MO4 - Using AHP weights to fill missing gaps in Markov decision models

    OBJECTIVES: We propose to combine the versatility of the analytic hierarchy process (AHP) with the decision-analytic sophistication of health-economic modeling in a new methodology for early technology assessment. As an illustration, we apply this methodology to a new technology to diagnose breast cancer. METHODS: The AHP is a technique for multicriteria analysis, relatively new in the field of technology assessment. It can integrate both quantitative and qualitative criteria in the assessment of alternative technologies. We applied the AHP to prioritize a more versatile set of outcome measures than most Markov models do. These outcome measures include clinical effectiveness and costs, but also weighted estimates of patient comfort and safety. Furthermore, as no clinical data are available for this technology yet, the AHP is applied to predict the performance of the new technology with regard to all these outcome measures. Results of the AHP are subsequently integrated in a Markov model to make an early assessment of the expected incremental cost-effectiveness of alternative technologies. RESULTS: We systematically estimated priors on the clinical effectiveness and wider impacts of the new technology using AHP. In our illustration, AHP estimates for sensitivity and specificity of the new diagnostic technology were used as probability parameters in the Markov model. Moreover, the prioritized outcome measures including clinical effectiveness (weight = 0.61), patient comfort (weight = 0.09), and safety (weight = 0.30) were integrated into one outcome measure in the Markov model. CONCLUSIONS: Combining AHP and Markov modelling is particularly valuable in early technology assessment when evidence about the effectiveness of health care technology is still limited or missing.
Moreover, combining these methods is valuable when decision makers are interested in patient-relevant outcome measures beyond the technology's clinical effectiveness, measures that may not be adequately or explicitly captured in mainstream utility measures.
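A minimal sketch of how AHP weights like these can roll several outcomes into one composite measure for use in a Markov model. The weights are the ones reported in the abstract; the per-criterion scores for the hypothetical new technology, and the function name, are invented for illustration and are not from the paper.

```python
# AHP criterion weights as reported in the abstract above.
AHP_WEIGHTS = {"clinical_effectiveness": 0.61, "safety": 0.30, "patient_comfort": 0.09}

def composite_outcome(scores):
    """Weighted additive aggregation of criterion scores on a 0-1 scale."""
    return sum(AHP_WEIGHTS[c] * scores[c] for c in AHP_WEIGHTS)

# Hypothetical per-criterion performance estimates for a new technology.
new_tech = {"clinical_effectiveness": 0.80, "safety": 0.90, "patient_comfort": 0.85}
print(composite_outcome(new_tech))
```

In the paper's setup, a value like this would replace a single clinical-effectiveness reward in each Markov state, so that comfort and safety influence the incremental cost-effectiveness result.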

    The influence of different intermittent myofeedback training schedules on learning relaxation of the trapezius muscle while performing a gross-motor task

    The aim of this study was to investigate the influence of different intermittent myofeedback training schedules, as provided by a Cinderella-based myofeedback system, on learning relaxation and resistance to extinction of the trapezius muscle, in subjects performing a unilateral gross-motor task. Eighteen healthy subjects performed the task without and with feedback to study baseline and learning relaxation. Subsequently, resistance to extinction was investigated by performing the task without feedback. The gross-motor task consisted of continuously moving the dominant arm between three target areas at a constant pace. Subjects were randomly assigned into three groups, characterized by the sequence of feedback schedules with which the task was performed on 3 consecutive days. Auditory feedback was provided after a 5-, 10-, or 20-s interval when a pre-set level of 80% rest was not reached. Bipolar surface electromyography recordings performed at the dominant upper trapezius muscle were quantified using relative rest time (RRT) and root mean square (RMS) parameters. Learning relaxation was defined as an increase in RRT and a decrease in RMS values. Results showed the highest RRT levels as well as a decrease in RMS for the 10-s schedule. Additionally, the 10-s schedule was unique in its ability to elevate muscular rest above the 20% level, which may be considered relevant in preventing myalgia. None of the three schedules showed resistance to extinction. It was concluded that the 10-s interval was preferred over the 5- and 20-s schedules in learning trapezius relaxation in subjects performing a unilateral gross-motor task
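The RRT and RMS parameters described above can be computed from a sampled EMG signal along these lines. This is an illustrative sketch only: the sample values and the rest threshold are invented, and the study's actual signal processing (sampling rate, windowing, calibration of the 80% rest level) is not specified here.

```python
import math

def rms(samples):
    """Root mean square amplitude of an EMG window."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def relative_rest_time(samples, threshold):
    """Fraction of samples whose absolute amplitude stays below a rest
    threshold -- a simple stand-in for the RRT parameter."""
    return sum(1 for x in samples if abs(x) < threshold) / len(samples)

# Invented EMG window (arbitrary units): mostly rest, one burst of activity.
emg = [0.02, -0.01, 0.30, 0.25, 0.01, -0.02, 0.02, 0.00]
print(rms(emg), relative_rest_time(emg, threshold=0.05))
```

Learning relaxation, as the study defines it, would then show up as RRT increasing and RMS decreasing across sessions.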

    Comparing Discrete Choice Experiment with Swing Weighting to Estimate Attribute Relative Importance: A Case Study in Lung Cancer Patient Preferences

    Introduction: Discrete choice experiments (DCE) are commonly used to elicit patient preferences and to determine the relative importance of attributes but can be complex and costly to administer. Simpler methods that measure relative importance exist, such as swing weighting with direct rating (SW-DR), but there is little empirical evidence comparing the two. This study aimed to directly compare attribute relative importance rankings and weights elicited using a DCE and SW-DR. Methods: A total of 307 patients with non–small-cell lung cancer in Italy and Belgium completed an online survey assessing preferences for cancer treatment using DCE and SW-DR. The relative importance of the attributes was determined using a random parameter logit model for the DCE and the rank order centroid method (ROC) for SW-DR. Differences in relative importance ranking and weights between the methods were assessed using Cohen's weighted kappa and Dirichlet regression. Feedback on ease of understanding and answering the 2 tasks was also collected. Results: Most respondents (>65%) found both tasks (very) easy to understand and answer. The same attribute, survival, was ranked most important irrespective of the methods applied. The overall ranking of the attributes on an aggregate level differed significantly between DCE and SW-ROC (P < 0.01). Greater differences in attribute weights between attributes were reported in DCE compared with SW-DR (P < 0.01). Agreement between the individual-level attribute ranking across methods was moderate (weighted kappa 0.53–0.55). Conclusion: Significant differences in attribute importance between DCE and SW-DR were found. Respondents reported both methods being relatively easy to understand and answer. Further studies confirming these findings are warranted. Such studies will help to provide accurate guidance for methods selection when studying relative attribute importance across a wide array of preference-relevant decisions. 
Both DCEs and SW tasks can be used to determine attribute relative importance rankings and weights; however, little evidence exists empirically comparing these methods in terms of outcomes or respondent usability. Most respondents found the DCE and SW tasks very easy or easy to understand and answer. A direct comparison of DCE and SW found significant differences in attribute importance rankings and weights as well as a greater spread in the DCE-derived attribute relative importance weights.
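The rank order centroid (ROC) method used here converts an attribute ranking into weights via a closed-form formula: the weight of the attribute ranked i-th of n is w_i = (1/n) Σ_{k=i}^{n} 1/k. A small sketch (the function name is ours, not from the paper):

```python
def roc_weights(n):
    """Rank order centroid weights for n ranked attributes.
    Rank 1 is the most important attribute; weights sum to 1."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n for i in range(1, n + 1)]

# For 4 ranked attributes the weights decrease with rank:
print([round(w, 4) for w in roc_weights(4)])
```

Because ROC derives weights from rank order alone, it discards the magnitude information a DCE captures, which is one plausible source of the flatter weight spread reported for SW-DR.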

    Withdrawing biologics in non-systemic JIA: what matters to pediatric rheumatologists?

    Objective: Approximately one third of children with JIA receive biologic therapy, but evidence on biologic therapy withdrawal is lacking. This study aims to increase our understanding of whether and when pediatric rheumatologists postpone a decision to withdraw biologic therapy in children with clinically inactive non-systemic JIA. Methods: A survey containing questions about background characteristics, treatment patterns, minimum treatment time with biologic therapy, and 16 different patient vignettes was distributed among 83 pediatric rheumatologists in Canada and the Netherlands. For each vignette, respondents were asked whether they would withdraw biologic therapy at their minimum treatment time, and if not, how long they would continue biologic therapy. Statistical analysis included descriptive statistics and logistic and interval regression analysis. Results: Thirty-three pediatric rheumatologists completed the survey (40% response rate). Pediatric rheumatologists are most likely to postpone the decision to withdraw biologic therapy when the child and/or parents express a preference for continuation (OR 6.3; p < 0.001), in case of a flare in the current treatment period (OR 3.9; p = 0.001), and in case of uveitis in the current treatment period (OR 3.9; p < 0.001). On average, biologic therapy withdrawal is initiated 6.7 months later when the child or parents prefer to continue treatment. Conclusion: Patients' and parents' preferences were the strongest driver of a decision to postpone biologic therapy withdrawal in children with clinically inactive non-systemic JIA, prolonging treatment duration. These findings highlight the potential benefit of a tool to support pediatric rheumatologists, patients, and parents in decision making, and can help inform its design

    Incorporating MCDA into HTA: challenges and potential solutions, with a focus on lower income settings

    Background: Multicriteria decision analysis (MCDA) has the potential to bring more structure and transparency to health technology assessment (HTA). The objective of this paper is to highlight key methodological and practical challenges facing the use of MCDA for HTA, with a particular focus on low- and middle-income countries (LMICs), and to highlight potential solutions to these challenges. Methodological challenges: Key lessons from existing applications of MCDA to HTA are summarized, including: that the socio-technical design of the MCDA reflects the local decision problem; that the properties of additive models are understood and applied to the criteria set; and that the alternative approaches for estimating opportunity cost, and the challenges with these approaches, are understood. Practical challenges: Existing efforts to implement HTA in LMICs suggest a number of lessons that can help overcome the practical challenges facing the implementation of MCDA in LMICs, including: adapting inputs from other settings and from expert opinion; investing in technical capacity; embedding the MCDA in the decision-making process; and ensuring that the MCDA design reflects local cultural and social factors. Conclusion: MCDA has the potential to improve decision making in LMICs. For this potential to be achieved, it is important that the lessons from existing applications of MCDA are learned
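As a concrete illustration of the additive model referred to above, an MCDA score is a weighted sum of normalized criterion scores, and alternatives are ranked by that sum. The criteria, weights, and scores below are hypothetical, not from the paper.

```python
# Hypothetical MCDA criterion weights (must sum to 1 for an additive model).
weights = {"effectiveness": 0.5, "cost": 0.3, "equity": 0.2}

# Hypothetical normalized scores (0-1, higher is better) per alternative.
alternatives = {
    "technology_A": {"effectiveness": 0.7, "cost": 0.4, "equity": 0.9},
    "technology_B": {"effectiveness": 0.6, "cost": 0.8, "equity": 0.5},
}

def additive_value(scores):
    """Additive MCDA value: sum over criteria of weight * score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

ranked = sorted(alternatives, key=lambda a: additive_value(alternatives[a]), reverse=True)
print(ranked)
```

The additive form is only defensible when the criteria are preferentially independent and non-overlapping, which is part of the "properties of additive models" challenge the paper raises.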