
    Interdependence, interaction, and relationships

    Interdependence theory presents a logical analysis of the structure of interpersonal situations, offering a conceptual framework in which interdependence situations can be analyzed in terms of six dimensions. Specific situations present specific problems and opportunities, logically implying the relevance of specific motives and permitting their expression. Via the concept of transformation, the theory explains how interaction is shaped by broader considerations such as long-term goals and concern for a partner's welfare. The theory illuminates our understanding of social-cognitive processes that are of longstanding interest to psychologists, such as cognition and affect, attribution, and self-presentation. The theory also explains adaptation to repeatedly encountered interdependence patterns, as well as the embodiment of such adaptations in interpersonal dispositions, relationship-specific motives, and social norms.

    Rescaling quality of life values from discrete choice experiments for use as QALYs: a cautionary tale

    Background: Researchers are increasingly investigating the potential for ordinal tasks such as ranking and discrete choice experiments to estimate QALY health state values. However, the assumptions of random utility theory, which underpin the statistical models used to provide these estimates, have received insufficient attention. In particular, the assumptions made about the decisions between living states and the death state are not satisfied, at least for some people. Estimated values are likely to be incorrectly anchored with respect to death (zero) in such circumstances. Methods: Data from the Investigating Choice Experiments for the preferences of older people CAPability instrument (ICECAP) valuation exercise were analysed. The values (previously anchored to the worst possible state) were rescaled using an ordinal model proposed previously to estimate QALY-like values. Bootstrapping was conducted to vary artificially the proportion of people who conformed to the conventional random utility model underpinning the analyses. Results: Only 26% of respondents conformed unequivocally to the assumptions of conventional random utility theory. At least 14% of respondents unequivocally violated the assumptions. Varying the relative proportions of conforming respondents in sensitivity analyses led to large changes in the estimated QALY values, particularly for lower-valued states. As a result, these values could be either positive (considered to be better than death) or negative (considered to be worse than death). Conclusion: Use of a statistical model such as conditional (multinomial) regression to anchor quality of life values from ordinal data to death is inappropriate in the presence of respondents who do not conform to the assumptions of conventional random utility theory. This is clearest when estimating values for that group of respondents observed in valuation samples who refuse to consider any living state to be worse than death: in such circumstances the model cannot be estimated. Only a valuation task requiring respondents to make choices in which both length and quality of life vary can produce estimates that properly reflect the preferences of all respondents.
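
    A minimal numeric sketch of the anchoring step at issue above, using made-up latent values rather than the ICECAP data: utilities estimated on an arbitrary ordinal-model scale are linearly rescaled so that the best state maps to 1 and death to 0, and a small shift in the estimated position of death (as can happen when fewer respondents conform to the random utility assumptions) can flip a low-valued state from positive to negative. State names and numbers are illustrative only.

        # Hypothetical latent utilities from a conditional-logit (random utility) model,
        # on an arbitrary scale where larger = better.  Values are illustrative only.
        latent = {
            "best_state": 2.40,
            "state_A":    1.10,
            "state_B":    0.15,
            "death":      0.00,
        }

        def to_qaly_scale(values, best="best_state", anchor="death"):
            """Linear rescaling so `best` maps to 1.0 and `anchor` (death) maps to 0.0."""
            span = values[best] - values[anchor]
            return {s: round((u - values[anchor]) / span, 3) for s, u in values.items()}

        print(to_qaly_scale(latent))
        # {'best_state': 1.0, 'state_A': 0.458, 'state_B': 0.062, 'death': 0.0}

        # If the estimated position of death moves above a mild state (e.g. because the
        # mix of conforming respondents changes), the rescaled value flips sign and
        # state_B comes out negative, i.e. "worse than death":
        print(to_qaly_scale(dict(latent, death=0.30)))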

    Resource Modelling: The Missing Piece of the HTA Jigsaw?

    Within health technology assessment (HTA), cost-effectiveness analysis and budget impact analyses have been broadly accepted as important components of decision making. However, whilst they address efficiency and affordability, the issue of implementation and feasibility has been largely ignored. HTA commonly takes place within a deliberative framework that captures issues of implementation and feasibility in a qualitative manner. We argue that only through a formal quantitative assessment of resource constraints can these issues be fully addressed. This paper argues the need for resource modelling to be considered explicitly in HTA. First, economic evaluation and budget impact models are described along with their limitations in evaluating feasibility. Next, resource modelling is defined and its usefulness is described along with examples of resource modelling from the literature. Then, the important issues that need to be considered when undertaking resource modelling are described before setting out recommendations for the use of resource modelling in HTA.

    Reconsidering the use of rankings in the valuation of health states: a model for estimating cardinal values from ordinal data

    BACKGROUND: In survey studies on health-state valuations, ordinal ranking exercises often are used as precursors to other elicitation methods such as the time trade-off (TTO) or standard gamble, but the ranking data have not been used in deriving cardinal valuations. This study reconsiders the role of ordinal ranks in valuing health and introduces a new approach to estimate interval-scaled valuations based on aggregate ranking data. METHODS: Analyses were undertaken on data from a previously published general population survey study in the United Kingdom that included rankings and TTO values for hypothetical states described using the EQ-5D classification system. The EQ-5D includes five domains (mobility, self-care, usual activities, pain/discomfort and anxiety/depression) with three possible levels on each. Rank data were analysed using a random utility model, operationalized through conditional logit regression. In the statistical model, probabilities of observed rankings were related to the latent utilities of different health states, modeled as a linear function of EQ-5D domain scores, as in previously reported EQ-5D valuation functions. Predicted valuations based on the conditional logit model were compared to observed TTO values for the 42 states in the study and to predictions based on a model estimated directly from the TTO values. Models were evaluated using the intraclass correlation coefficient (ICC) between predictions and mean observations, and the root mean squared error of predictions at the individual level. RESULTS: Agreement between predicted valuations from the rank model and observed TTO values was very high, with an ICC of 0.97, only marginally lower than for predictions based on the model estimated directly from TTO values (ICC = 0.99). Individual-level errors were also comparable in the two models, with root mean squared errors of 0.503 and 0.496 for the rank-based and TTO-based predictions, respectively. CONCLUSIONS: Modeling health-state valuations based on ordinal ranks can provide results that are similar to those obtained from more widely analyzed valuation techniques such as the TTO. The information content in aggregate ranking data is not currently exploited to full advantage. The possibility of estimating cardinal valuations from ordinal ranks could also simplify future data collection dramatically and facilitate wider empirical study of health-state valuations in diverse settings and population groups.
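
    As a rough illustration of how ranking data can feed a conditional logit model (a sketch under the usual rank-ordered "exploded" logit assumption, not the authors' code), each respondent's full ranking is broken into a sequence of best-of-remaining choices, and each of those pseudo-choices enters the conditional logit likelihood. The EQ-5D state labels below are arbitrary.

        def explode_ranking(ranking):
            """Turn a ranking (best first) into (chosen, choice_set) pairs: the top
            state is chosen from all states, the next from the remainder, and so on.
            Each pair is one pseudo-observation for a conditional logit likelihood."""
            pseudo_obs = []
            remaining = list(ranking)
            while len(remaining) > 1:
                chosen, remaining = remaining[0], remaining[1:]
                pseudo_obs.append((chosen, [chosen] + remaining))
            return pseudo_obs

        # One respondent's hypothetical ranking of four EQ-5D states, best to worst.
        ranking = ["11111", "11121", "21232", "33333"]
        for chosen, choice_set in explode_ranking(ranking):
            print(f"chose {chosen} from {choice_set}")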

    On the interpretation of removable interactions: A survey of the field 33 years after Loftus

    In a classic 1978 Memory & Cognition article, Geoff Loftus explained why noncrossover interactions are removable. These removable interactions are tied to the scale of measurement for the dependent variable and therefore do not allow unambiguous conclusions about latent psychological processes. In the present article, we present concrete examples of how this insight helps prevent experimental psychologists from drawing incorrect conclusions about the effects of forgetting and aging. In addition, we extend the Loftus classification scheme for interactions to include those on the cusp between removable and nonremovable. Finally, we use various methods (i.e., a study of citation histories, a questionnaire for psychology students and faculty members, an analysis of statistical textbooks, and a review of articles published in the 2008 issue of Psychology and Aging) to show that experimental psychologists have remained generally unaware of the concept of removable interactions. We conclude that there is more to interactions in a 2 × 2 design than meets the eye.
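
    A toy numeric example (not taken from the article) of what "removable" means here: the raw 2 × 2 cell means below show a noncrossover interaction, but a monotone log transform of the dependent variable makes the two effects purely additive, so the interaction by itself says nothing about latent processes.

        import numpy as np

        # Hypothetical cell means (e.g., response times in ms) for a 2 x 2 design:
        # rows = group (young, old), columns = condition (easy, hard).
        raw = np.array([[100.0, 200.0],
                        [200.0, 400.0]])

        def interaction_contrast(m):
            """Difference of differences: (hard - easy) in row 2 minus (hard - easy) in row 1."""
            return (m[1, 1] - m[1, 0]) - (m[0, 1] - m[0, 0])

        print(interaction_contrast(raw))                    # 100.0 -> apparent noncrossover interaction
        print(round(interaction_contrast(np.log(raw)), 9))  # ~0    -> removed by a monotone (log) transform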

    Reliability, construct validity and measurement potential of the ICF comprehensive core set for osteoarthritis

    Background: This study aimed to investigate the reliability and construct validity of the International Classification of Functioning, Disability and Health (ICF) Comprehensive Core Set for osteoarthritis (OA) in order to test its possible use as a measuring tool for functioning. Methods: 100 patients with OA (84 F, 16 M; mean age 63 yr) completed forms covering demographic and clinical information as well as the Short Form (36) Health Survey (SF-36®) and the Western Ontario and McMaster Universities Index of Osteoarthritis (WOMAC). The ICF Comprehensive Core Set for OA was completed by health professionals. The internal construct validity of the "Body Functions-Body Structures" (BF-BS), "Activity" (A), "Participation" (P) and "Environmental Factors" (EF) domains was tested by Rasch analysis, and reliability by internal consistency and the person separation index (PSI). External construct validity was evaluated by correlating the Rasch-transformed scores with the SF-36 and WOMAC. Results: In each scale, some items showing disordered thresholds were rescored, testlets were created to overcome the problem of local dependency, and items that did not fit the Rasch model were deleted. The internal construct validity of the four scales (BF-BS 16 items, A 8 items, P 7 items, EF 13 items) was good [mean item fit (SD) 0.138 (0.921), 0.216 (1.237), 0.759 (0.986) and -0.079 (2.200); person fit (SD) -0.147 (0.652), -0.241 (0.894), -0.310 (1.187) and -0.491 (1.173), respectively], indicating a single underlying construct for each scale. The scales were free of differential item functioning (DIF) for age, gender, years of education and duration of disease. Reliabilities of the BF-BS, A, P, and EF scales were good, with Cronbach's alphas of 0.79, 0.86, 0.88, and 0.83 and PSIs of 0.76, 0.86, 0.87, and 0.71, respectively. Rasch scores of BF-BS, A, and P showed moderate correlations with SF-36 and WOMAC scores, whereas the EF scale had significant but weak correlations only with SF-36 Social Functioning and SF-36 Mental Health. Conclusion: Since the four scales derived from the BF-BS, A, P, and EF components of the ICF Core Set for OA were shown to be valid and reliable through a combination of Rasch analysis and classical psychometric methods, they might be used as clinical assessment tools.
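
    For readers unfamiliar with the internal-consistency figures quoted above, a minimal sketch of how Cronbach's alpha is computed from a respondents-by-items score matrix; the simulated data, item count, and noise level are arbitrary, and this is not the study's analysis (which additionally used Rasch modelling).

        import numpy as np

        def cronbach_alpha(scores):
            """Cronbach's alpha for a 2-D array: rows = respondents, columns = items."""
            scores = np.asarray(scores, dtype=float)
            k = scores.shape[1]
            item_var_sum = scores.var(axis=0, ddof=1).sum()
            total_var = scores.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

        # Simulated responses: one shared person factor plus item noise.
        rng = np.random.default_rng(0)
        ability = rng.normal(size=(100, 1))
        items = ability + rng.normal(scale=0.8, size=(100, 8))
        print(round(cronbach_alpha(items), 2))   # around 0.9 for these simulated settings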

    Income Attainment among Victims of Violence: Results From a Preliminary Study

    Violent victimisation may have many short-term psychological and physical outcomes. Occasionally, the negative aftermath of violence persists over time or induces other, more far-reaching consequences. Income attainment after victimisation is one of these outcomes. To date, previous studies have focussed on the income effects of violent victimisation during childhood and adolescence. Violence exposure during the early stages of the life course may frustrate processes of educational and occupational attainment and consequently result in lower income levels. However, many other, age-independent pathways between violent victimisation and income may additionally or alternatively be at work; prior studies appear to have paid little attention to this issue. Therefore, the purpose of the current study was to explore whether violent victimisation is associated with income levels several years after victimisation, irrespective of the age at which victimisation occurs. Victims of violence were recruited through the Dutch Victim Compensation Fund. To obtain a preliminary estimate of the effect of violent victimisation on income, a comparable control group of non-victims was assembled. The study sample contained 206 victims and 173 non-victims. Both bivariate correlational and multivariate statistical techniques suggested that violent victimisation is a significant predictor of income. Implications of these results are discussed with regard to future research and policy practice.

    To Test or to Treat? An Analysis of Influenza Testing and Antiviral Treatment Strategies Using Economic Computer Modeling

    BACKGROUND: Due to the unpredictable burden of pandemic influenza, the best strategy to manage testing, such as rapid or polymerase chain reaction (PCR), and antiviral medications for patients who present with influenza-like illness (ILI) is unknown. METHODOLOGY/PRINCIPAL FINDINGS: We developed a set of computer simulation models to evaluate the potential economic value of seven strategies under seasonal and pandemic influenza conditions: (1) using clinical judgment alone to guide antiviral use, (2) using PCR to determine whether to initiate antivirals, (3) using a rapid (point-of-care) test to determine antiviral use, (4) using a combination of a point-of-care test and clinical judgment, (5) using clinical judgment and confirming the diagnosis with PCR testing, (6) treating all with antivirals, and (7) not treating anyone with antivirals. For healthy younger adults (<65 years old) presenting with ILI in a seasonal influenza scenario, strategies were only cost-effective from the societal perspective. Clinical judgment, followed by PCR and point-of-care testing, was found to be cost-effective given a high influenza probability. Doubling hospitalization risk and mortality (representing either higher risk individuals or more virulent strains) made using clinical judgment to guide antiviral decision-making cost-effective, as well as PCR testing, point-of-care testing, and point-of-care testing used in conjunction with clinical judgment. For older adults (≥65 years old), in both seasonal and pandemic influenza scenarios, employing PCR was the most cost-effective option, with the closest competitor being clinical judgment (when judgment accuracy ≥50%). Point-of-care testing plus clinical judgment was cost-effective with higher probabilities of influenza. Treating all symptomatic ILI patients with antivirals was cost-effective only in older adults. CONCLUSIONS/SIGNIFICANCE: Our study delineated the conditions under which different testing and antiviral strategies may be cost-effective, showing the importance of accuracy, as seen with PCR or highly sensitive clinical judgment.
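
    A stripped-down sketch of the kind of decision analysis described above, with purely hypothetical costs and test characteristics and the simplifying assumption that treating a true influenza case with antivirals fully averts its downstream cost; the published model is far richer (QALYs, hospitalization, mortality, societal costs).

        def expected_cost(p_flu, sens, spec, c_test, c_antiviral, c_missed_flu):
            """Expected cost per ILI patient for 'test, then treat test-positives only'."""
            treated = p_flu * sens + (1 - p_flu) * (1 - spec)   # true + false positives
            missed = p_flu * (1 - sens)                          # false negatives
            return c_test + treated * c_antiviral + missed * c_missed_flu

        # Hypothetical inputs (not from the study).
        C_ANTIVIRAL, C_MISSED_FLU = 60.0, 400.0
        for p_flu in (0.1, 0.3, 0.6):
            treat_all = C_ANTIVIRAL                              # everyone treated, nothing missed
            treat_none = p_flu * C_MISSED_FLU                    # every influenza case goes untreated
            rapid = expected_cost(p_flu, sens=0.60, spec=0.90, c_test=25.0,
                                  c_antiviral=C_ANTIVIRAL, c_missed_flu=C_MISSED_FLU)
            pcr = expected_cost(p_flu, sens=0.95, spec=0.99, c_test=150.0,
                                c_antiviral=C_ANTIVIRAL, c_missed_flu=C_MISSED_FLU)
            print(f"P(flu)={p_flu:.1f}  treat-all={treat_all:6.1f}  treat-none={treat_none:6.1f}  "
                  f"rapid test={rapid:6.1f}  PCR={pcr:6.1f}")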