118 research outputs found

    Jackknifing estimated weighted least squares

    Mapping of health technology assessment in selected countries

    Objectives: The aim of this study was to develop and apply an instrument to map the level of health technology assessment (HTA) development at country level in selected countries. We examined middle-income countries (Argentina, Brazil, India, Indonesia, Malaysia, Mexico, and Russia) and countries well-known for their comprehensive HTA programs (Australia, Canada, and the United Kingdom). Methods: A review of relevant key documents regarding the HTA process was performed to develop the instrument, which was then reviewed by selected HTAi members and revised. We identified and collected relevant information to map the level of HTA in the selected countries. This was supplemented by information from a structured survey among HTA experts in the selected countries (response rate: 65/385). Results: Mapping of HTA in a country can be done by focusing on the level of institutionalization and the HTA process (identification, priority setting, assessment, appraisal, reporting, dissemination, and implementation in policy and practice). Although HTA is most advanced in industrialized countries, there is a growing community in middle-income countries that uses HTA. For example, Brazil is rapidly developing effective HTA programs. India and Russia are at the very beginning of introducing HTA. The other middle-income countries show intermediate levels of HTA development compared with the reference countries. Conclusions: This study presents a set of indicators for documenting the current level and trends in HTA at country level. The findings can be used as a baseline measurement for future monitoring and evaluation. This will allow a variety of stakeholders to assess the development of HTA in their country, help inform strategies, and justify expenditure for HTA.

    Challenging Distributional Models with a Conceptual Network of Philosophical Terms

    Computational linguistic research on language change through distributional semantic (DS) models has inspired researchers from fields such as philosophy and literary studies, who use these methods to explore and compare relatively small datasets traditionally analyzed by close reading. Research on methods for small data is still in its early stages, and it is not yet clear which methods achieve the best results. We investigate the possibilities and limitations of using distributional semantic models for analyzing philosophical data by means of a realistic use case. We provide a ground truth for evaluation created by philosophy experts and a blueprint for using DS models in a sound methodological setup. We compare three methods for creating specialized models from small datasets. Although the models do not yet perform well enough to directly support philosophers, we find that models designed for small data yield promising directions for future work.
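
    As a concrete illustration of the kind of pipeline described above, the sketch below trains a small skip-gram model on a toy corpus and scores its nearest neighbours against an expert-curated concept network. It is a minimal sketch only: the corpus, term list, ground-truth links, and the precision_at_k helper are invented placeholders, not the paper's data or method, and the gensim library is assumed to be available.

    from gensim.models import Word2Vec

    # Toy tokenized corpus; in practice this would be sentences from a philosophical text corpus.
    corpus = [
        ["substance", "exists", "independently", "of", "its", "modes"],
        ["a", "mode", "is", "an", "affection", "of", "substance"],
        ["an", "attribute", "expresses", "the", "essence", "of", "substance"],
        ["the", "mind", "is", "the", "idea", "of", "the", "body"],
    ]

    # Skip-gram (sg=1) with many epochs is a common choice for very small corpora.
    model = Word2Vec(corpus, vector_size=50, window=5, min_count=1, sg=1, epochs=200, seed=42)

    # Hypothetical expert ground truth: which concepts are linked in the conceptual network.
    ground_truth = {"substance": {"mode", "attribute"}, "mind": {"body", "idea"}}

    def precision_at_k(term, k=3):
        # Fraction of the model's top-k neighbours that are also linked in the expert network.
        neighbours = [w for w, _ in model.wv.most_similar(term, topn=k)]
        return len(set(neighbours) & ground_truth[term]) / k

    for term in ground_truth:
        print(term, round(precision_at_k(term), 2))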

    Logic models help make sense of complexity in systematic reviews and health technology assessments

    OBJECTIVE: To describe the development and application of logic model templates for systematic reviews and health technology assessments (HTAs) of complex interventions. STUDY DESIGN AND SETTING: This study demonstrates the development of a method to conceptualise complexity and make underlying assumptions transparent. Examples from systematic reviews with specific relevance to sub-Saharan Africa (SSA) and other low- and middle-income countries (LMICs) illustrate its usefulness. RESULTS: Two distinct templates are presented: the system-based logic model, which describes the system in which the interaction between participants, intervention and context takes place; and the process-orientated logic model, which displays the processes and causal pathways that lead from the intervention to multiple outcomes. CONCLUSION: Logic models can help authors of systematic reviews and HTAs to explicitly address and make sense of complexity, adding value by achieving a better understanding of the interactions between the intervention, its implementation and its multiple outcomes in a given population and context. They thus have the potential to help build systematic review capacity in SSA and other LMICs: at an individual level, by equipping authors with a tool that facilitates the review process, and at a system level, by improving communication between producers and potential users of research evidence.

    Static Code Verification Through Process Models

    In this extended abstract, we combine two techniques for program verification: one is Hoare-style static verification, and the other is model checking of state transition systems. We relate the two techniques semantically through the use of a ghost variable. Actions performed by the program can be logged into this variable, building an event structure as its value. We require the event structure to grow incrementally by construction, giving it behavior suitable for model checking. Invariants specify a correspondence between the event structure and the program state. The combined power of model checking and static code verification with separation-logic-based reasoning gives a new and intuitive way to do program verification. We describe our idea in a tool-agnostic way: we do not give implementation details, nor do we assume that the static verification tool to which our idea might apply is implemented in a particular way.
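
    The ghost-variable idea can be mimicked in ordinary code. The sketch below is only an executable analogy under invented names: each action appends an event to a log that is only ever extended, and an invariant ties the log's contents to the program state. In the approach described above, the log would live in specification-only ghost code, the invariant would be expressed with separation-logic-based reasoning, and the resulting event structure would be handed to a model checker rather than checked with runtime assertions.

    class Counter:
        def __init__(self):
            self.value = 0
            self.ghost_log = []  # ghost state: grows monotonically, entries are never rewritten

        def increment(self):
            self.value += 1
            self.ghost_log.append(("inc", 1))  # log the action as an event

        def decrement(self):
            self.value -= 1
            self.ghost_log.append(("dec", 1))

        def invariant(self):
            # Correspondence between the event structure and the program state:
            # the counter equals the number of logged increments minus decrements.
            incs = sum(n for tag, n in self.ghost_log if tag == "inc")
            decs = sum(n for tag, n in self.ghost_log if tag == "dec")
            return self.value == incs - decs

    c = Counter()
    c.increment()
    c.increment()
    c.decrement()
    assert c.invariant()  # holds: value == 1 == 2 - 1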

    On Models and Code: A Unified Approach to Support Large-Scale Deductive Program Verification

    Despite the substantial progress in the area of deductive program verification in recent years, it still remains a challenge to use deductive verification on large-scale industrial applications. In this abstract, I analyse why this is the case, and I argue that, in order to solve this, we need to soften the border between models and code. This has two important advantages: (1) it would make it easier to reason about the high-level behaviour of programs using deductive verification, and (2) it would make it possible to reason about incomplete applications during the development process. I discuss how the first steps towards this goal are supported by verification techniques within the VerCors project, and I will sketch the future steps that are necessary to realise this goal.
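
    A minimal sketch of what softening the border between models and code could look like, using invented names and runtime assertions rather than deductive proofs: an unimplemented component is replaced by a model that only enforces its contract, so the surrounding application can be reasoned about before the real implementation exists. This is an illustrative analogy, not VerCors syntax or the abstract's actual technique.

    class PaymentServiceModel:
        # Model of a not-yet-implemented component: only its contract matters here.
        def charge(self, amount):
            assert amount > 0                    # precondition taken from the model
            result = self._abstract_behaviour(amount)
            assert result in ("ok", "declined")  # postcondition every implementation must satisfy
            return result

        def _abstract_behaviour(self, amount):
            # Placeholder standing in for the future implementation; the model simply
            # declines large charges so that client code can already be exercised.
            return "ok" if amount < 1000 else "declined"

    def checkout(service, amount):
        # Client code is checked against the model's contract, not a concrete implementation.
        return service.charge(amount) == "ok"

    assert checkout(PaymentServiceModel(), 50)
    assert not checkout(PaymentServiceModel(), 5000)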
    • …