
    Choice and judgement in developing models for health technology assessment: a qualitative study

    Introduction: The role of models in supporting health policy decisions relies on model credibility, which is fundamentally determined by the choices and judgements that people make while developing a model. However, how choices are uncovered and judgements are made during model development is largely unreported and is not addressed by modelling methods guidance. Methods: This qualitative study was part of a project examining errors in health technology assessment models. In-depth interviews with academic and commercial modellers were used to obtain descriptions of the model development process. Data were analysed using framework analysis and interpreted in the context of the methodological literature. Results: The activities involved in developing models were characterised according to the themes: understanding the decision problem, conceptual modelling, model implementation, model checking, and engaging with the decision maker. Finding and using evidence was frequently mentioned across these themes. There was marked variation between practitioners in the extent to which conceptual modelling was recognised as an activity distinct from model implementation. Discussion: Methodological approaches to addressing model credibility described in the wider modelling literature highlight the necessity of disentangling the conceptual modelling and implementation activities. Whilst interviewees talked of judgements and choice making throughout model development, the interviews indicated that these were based on skills and experience, with no mention of formal approaches. Methods are required that provide a systematic approach to uncovering choices, to generating a shared view of consensus and divergence, and to making judgements and choices in model development.

    Natural experiments: An overview of methods, approaches, and contributions to public health intervention research

    Population health interventions are essential to reduce health inequalities and tackle other public health priorities, but they are not always amenable to experimental manipulation. Natural experiment (NE) approaches are attracting growing interest as a way of providing evidence in such circumstances. One key challenge in evaluating NEs is selective exposure to the intervention. Studies should be based on a clear theoretical understanding of the processes that determine exposure. Even if the observed effects are large and rapidly follow implementation, confidence in attributing these effects to the intervention can be improved by carefully considering alternative explanations. Causal inference can be strengthened by including additional design features alongside the principal method of effect estimation. NE studies often rely on existing (including routinely collected) data. Investment in such data sources and the infrastructure for linking exposure and outcome data is essential if the potential for such studies to inform decision making is to be realized.
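    The abstract does not name a specific estimator, but difference-in-differences is one commonly used principal effect-estimation method in NE studies. Below is a minimal sketch in Python; the column names and data are invented for illustration, and statsmodels is assumed to be available.

```python
# Hypothetical illustration: difference-in-differences (DiD), a common
# effect-estimation method in natural experiment studies. All column
# names and data are invented for this sketch.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "outcome": [3.1, 3.4, 2.9, 3.2, 3.0, 4.1, 2.8, 3.1],
    "treated": [1, 1, 1, 1, 0, 0, 0, 0],   # 1 = exposed group
    "post":    [0, 0, 1, 1, 0, 0, 1, 1],   # 1 = after implementation
})

# The coefficient on treated:post is the DiD estimate of the intervention
# effect, valid under the parallel-trends assumption.
model = smf.ols("outcome ~ treated * post", data=df).fit()
print(model.summary())
```

    Design features such as additional control groups or pre-intervention trend checks would be layered on top of a basic estimate like this to strengthen causal inference.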

    Analysis of time-to-event for observational studies: Guidance to the use of intensity models

    This paper provides guidance for researchers with some mathematical background on the conduct of time-to-event analysis in observational studies based on intensity (hazard) models. Basic concepts such as the time axis, event definition, and censoring are discussed. Hazard models are introduced, with special emphasis on the Cox proportional hazards regression model. We provide checklists that may be useful both when fitting the model and assessing its goodness of fit and when interpreting the results. Special attention is paid to how to avoid problems with immortal time bias by introducing time-dependent covariates. We discuss prediction based on hazard models and the difficulties that arise when attempting to draw proper causal conclusions from such models. Finally, we present a series of examples where the methods and checklists are exemplified. Computational details and implementation using the freely available R software are documented in the Supplementary Material. The paper was prepared as part of the STRATOS initiative. Comment: 28 pages, 12 figures. For associated Supplementary Material, see http://publicifsv.sund.ku.dk/~pka/STRATOSTG8
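    The paper's own computational examples use R (see its Supplementary Material). As a rough Python analogue, the sketch below fits a Cox proportional hazards model and runs a goodness-of-fit check using the lifelines package; the bundled Rossi dataset and all settings are illustrative assumptions, not the paper's code.

```python
# A minimal sketch of fitting a Cox proportional hazards model in Python.
# Dataset and column names are illustrative; the paper's examples use R.
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # recidivism data bundled with lifelines

cph = CoxPHFitter()
cph.fit(df, duration_col="week", event_col="arrest")  # time axis and event indicator
cph.print_summary()  # hazard ratios with confidence intervals

# Goodness-of-fit check: tests the proportional hazards assumption and
# prints advice (e.g., when a time-varying effect may be needed).
cph.check_assumptions(df, p_value_threshold=0.05)
```

    For settings where immortal time bias is a concern, lifelines also offers a time-varying Cox fitter that accepts start-stop formatted data with time-dependent covariates.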

    Towards Design Principles for Data-Driven Decision Making: An Action Design Research Project in the Maritime Industry

    Data-driven decision making (DDD) refers to organizational decision-making practices that emphasize the use of data and statistical analysis instead of relying on human judgment alone. Various empirical studies provide evidence for the value of DDD, both at the level of the individual decision maker and at the organizational level. Yet the path from data to value is not always an easy one, and various organizational and psychological factors mediate and moderate the translation of data-driven insights into better decisions and, subsequently, effective business actions. The current body of academic literature on DDD lacks prescriptive knowledge on how to successfully employ DDD in complex organizational settings. Against this background, this paper reports on an action design research study aimed at designing and implementing IT artifacts for DDD at one of the largest ship engine manufacturers in the world. Our main contribution is a set of design principles highlighting, besides decision quality, the importance of model comprehensibility, domain knowledge, and actionability of results.

    The uses of qualitative data in multimethodology: Developing causal loop diagrams during the coding process

    In this research note we describe a method for exploring the creation of causal loop diagrams (CLDs) from the coding trees developed through a grounded theory approach and using computer-aided qualitative data analysis software (CAQDAS). The theoretical background to the approach is multimethodology, in line with Mingers' description of paradigm crossing, and it is appropriately situated within the Appreciate and Analyse phases of a PSM intervention. The practical use of this method has been explored, and three case studies are presented from the domains of organisational change and entrepreneurial studies. The value of this method is twofold: (i) it has the potential to improve dynamic sensibility in the process of qualitative data analysis, and (ii) it can provide a more rigorous approach to developing CLDs in the formation stage of system dynamics modelling. We propose that the further development of this method requires its implementation within CAQDAS packages so that CLD creation, as a precursor to full system dynamics modelling, is contemporaneous with coding and consistent with a bridging strategy of paradigm crossing.
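    As a loose illustration of the structure such coding produces, a CLD can be represented as a signed directed graph built from coded cause-effect pairs. The sketch below uses Python with networkx; the codes, link polarities, and loop classification are invented for the example and are not taken from the paper's case studies.

```python
# Illustrative sketch: a causal loop diagram as a signed directed graph
# assembled from coded cause-effect pairs. All codes are invented.
import networkx as nx

coded_links = [
    ("staff workload", "error rate", "+"),   # more workload -> more errors
    ("error rate", "rework", "+"),
    ("rework", "staff workload", "+"),       # closes a feedback loop
    ("training", "error rate", "-"),
]

cld = nx.DiGraph()
for cause, effect, polarity in coded_links:
    cld.add_edge(cause, effect, polarity=polarity)

# CLD convention: a loop with an even number of "-" links is reinforcing,
# otherwise balancing.
for loop in nx.simple_cycles(cld):
    signs = [cld[a][b]["polarity"] for a, b in zip(loop, loop[1:] + loop[:1])]
    kind = "reinforcing" if signs.count("-") % 2 == 0 else "balancing"
    print(loop, kind)
```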

    Lost in Translation: Piloting a Novel Framework to Assess the Challenges in Translating Scientific Uncertainty From Empirical Findings to WHO Policy Statements.

    BACKGROUND: Calls for evidence-informed public health policy, with implicit promises of greater program effectiveness, have intensified recently. The methods to produce such policies are not self-evident, requiring a conciliation of values and norms between policy-makers and evidence producers. In particular, the translation of uncertainty from empirical research findings, particularly issues of statistical variability and generalizability, is a persistent challenge because of the incremental nature of research and the iterative cycle of advancing knowledge and implementation. This paper aims to assess how the concept of uncertainty is considered and acknowledged in World Health Organization (WHO) policy recommendations and guidelines. METHODS: We selected four WHO policy statements published between 2008 and 2013, regarding maternal and child nutrient supplementation, infant feeding, heat action plans, and malaria control, to represent topics with a spectrum of available evidence bases. Each of these four statements was analyzed using a novel framework to assess the treatment of statistical variability and generalizability. RESULTS: WHO currently provides substantial guidance on addressing statistical variability through GRADE (Grading of Recommendations Assessment, Development, and Evaluation) ratings for precision and consistency in its guideline documents. Accordingly, our analysis showed that policy-informing questions were addressed by systematic reviews and representations of statistical variability (eg, with numeric confidence intervals). In contrast, the presentation of contextual or "background" evidence regarding etiology or disease burden showed little consideration for this variability. Moreover, generalizability or "indirectness" was uniformly neglected, with little explicit consideration of study settings or subgroups. CONCLUSION: We found that the treatment of statistical variability was non-uniform and that factors that may contribute to uncertainty regarding recommendations were neglected, including the state of the evidence informing background questions (the prevalence, mechanisms, or burden and distribution of health problems) and the generalizability of findings, alternate interventions, and additional outcomes not captured by systematic review. These other factors often form a basis for providing policy recommendations, particularly in the absence of a strong evidence base for intervention effects. Consequently, they should also be subject to stringent and systematic evaluation criteria. We suggest that more effort is needed to systematically acknowledge (1) when evidence is missing, conflicting, or equivocal, (2) what normative considerations were also employed, and (3) how additional evidence may be accrued.
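    As a toy example of the numeric representation of statistical variability the framework looks for, the snippet below computes a 95% confidence interval for a risk ratio from a hypothetical two-arm supplementation trial; all counts are invented.

```python
# Toy example: 95% confidence interval for a risk ratio, the kind of
# numeric variability statement GRADE-based guidance expects. Counts
# are invented for illustration.
import math

a, n1 = 30, 200   # events / total in intervention arm
b, n2 = 50, 200   # events / total in control arm

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```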