
    Use of Evidence-informed Deliberative Processes by Health Technology Assessment Agencies Around The Globe

    Background: Evidence-informed deliberative processes (EDPs) were recently introduced to guide health technology assessment (HTA) agencies in improving their processes towards more legitimate decision-making. The EDP framework provides guidance covering the whole HTA process, i.e., contextual factors, installation of an appraisal committee, selection of health technologies and criteria, assessment, appraisal, and communication and appeal. The aims of this study were to identify the level of use of EDPs by HTA agencies, identify their needs for guidance, and learn about best practices. Methods: A questionnaire for an online survey was developed based on the EDP framework, consisting of elements that reflect each part of the framework. The survey was sent to members of the International Network of Agencies for Health Technology Assessment (INAHTA), with a reminder two weeks after the invitation. Data collection took place between September and December 2018. Results: Contact persons from 27 member agencies filled out the survey (response rate: 54%), of which 25 completed all questions. We found that contextual factors to support HTA development and the critical elements regarding conducting and reporting on HTA are generally in place. Respondents indicated that guidance was needed for specific elements related to selecting technologies and criteria, appraisal, and communication and appeal. With regard to best practices, the Canadian Agency for Drugs and Technologies in Health (CADTH) and the National Institute for Health and Care Excellence (NICE, UK) were most often mentioned. Conclusion: This is the first survey among HTA agencies on the use of EDPs, and it provides useful information for further developing a practical guide for HTA agencies around the globe. The results could support HTA agencies in improving their processes towards more legitimate decision-making, as they could serve as a baseline measurement for future monitoring and evaluation.

    Business models for sustained ehealth implementation: lessons from two continents

    There is general consensus that computers and information technology have the potential to enhance health systems applications, and many good examples of such applications exist all over the world. Unfortunately, with respect to eHealth and telemedicine, there is much disillusionment and scepticism. This paper describes two models that were developed separately but had the same purpose, namely to facilitate a holistic approach to the development and implementation of eHealth solutions. The roadmap of the Centre for eHealth Research (CeHRes roadmap) was developed in the Netherlands, and the Telemedicine Maturity Model (TMMM) was developed in South Africa. The purpose of this paper is to analyse the commonalities and differences of these approaches and to explore how they can be used to complement each other. The first part of the paper comprises a comparison of the models in terms of origin, research domain, and design principles. Case comparisons are then presented to illustrate how the models complement one another.

    Risk assessment and relationship management: practical approach to supply chain risk management

    The literature suggests the need for incorporating the risk construct into the measurement of organisational performance, although few examples are available as to how this might be undertaken in relation to supply chains. A conceptual framework for the development of performance and risk management within the supply chain is evolved from the literature and empirical evidence. The twin levels of dyadic performance/risk management and the management of a portfolio of performance/risks are addressed, employing Agency Theory to guide the analysis. The empirical evidence relates to the downstream management of dealerships by a large multinational organisation. Propositions are derived from the analysis relating to the issues and mechanisms that may be employed to effectively manage a portfolio of supply chain performance and risks.

    Enterprise information security policy assessment - an extended framework for metrics development utilising the goal-question-metric approach

    Effective enterprise information security policy management requires review and assessment activities to ensure information security policies are aligned with business goals and objectives. As security policy management involves both the policy development process and the security policy as its output, the context for security policy assessment requires goal-based metrics for these two elements. However, current security management assessment methods only provide checklist-style assessments that are predefined by industry best practices and do not allow for developing specific goal-based metrics. Utilising theories drawn from the literature, this paper proposes the Enterprise Information Security Policy Assessment approach, which expands on the Goal-Question-Metric (GQM) approach. The proposed assessment approach is then applied in a case scenario to illustrate a practical application. It is shown that the proposed framework addresses the requirement for developing assessment metrics and allows for the concurrent undertaking of process-based and product-based assessment. Recommendations for further research include empirical research to validate the propositions and practical application of the proposed assessment approach in case studies, providing opportunities to introduce further enhancements to the approach.
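    The GQM approach this abstract builds on derives metrics top-down from a goal via operational questions. A minimal sketch of that structure, with an illustrative security-policy goal (the field names follow the standard GQM goal template; the example goal, question, and metric are assumptions, not taken from the paper):

    ```python
    # Minimal sketch of the Goal-Question-Metric (GQM) hierarchy.
    # The example goal/question/metric below are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Metric:
        name: str
        unit: str

    @dataclass
    class Question:
        text: str
        metrics: list = field(default_factory=list)

    @dataclass
    class Goal:
        purpose: str    # e.g. "assess"
        issue: str      # what quality aspect is examined
        object: str     # the thing being assessed (process or product)
        viewpoint: str  # whose perspective the goal is stated from
        questions: list = field(default_factory=list)

    # A goal-based (rather than checklist-based) assessment metric:
    goal = Goal(
        purpose="assess",
        issue="alignment with business objectives",
        object="information security policy",
        viewpoint="security manager",
    )
    q = Question("What share of policy statements trace to a business objective?")
    q.metrics.append(Metric("traced_statements_ratio", "percent"))
    goal.questions.append(q)
    ```

    The point of the hierarchy is that every metric is justified by a question, which in turn is justified by a stated goal, so the resulting assessment is goal-based rather than a predefined checklist.
    
    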

    ACon: A learning-based approach to deal with uncertainty in contextual requirements at runtime

    Context: Runtime uncertainty, such as an unpredictable operational environment or failure of the sensors that gather environmental data, is a well-known challenge for adaptive systems. Objective: To execute requirements that depend on context correctly, the system needs up-to-date knowledge about the context relevant to such requirements. Techniques to cope with uncertainty in contextual requirements are currently underrepresented. In this paper we present ACon (Adaptation of Contextual requirements), a data-mining approach to deal with runtime uncertainty affecting contextual requirements. Method: ACon uses feedback loops to maintain up-to-date knowledge about contextual requirements based on the current context information in which contextual requirements are valid at runtime. Upon detecting that contextual requirements are affected by runtime uncertainty, ACon analyses and mines contextual data to (re-)operationalize context and thereby update the information about contextual requirements. Results: We evaluate ACon in an empirical study of an activity scheduling system used by a crew of four rowers in a wild and unpredictable environment, using a complex monitoring infrastructure. Our study focused on evaluating the data-mining part of ACon and analysed the sensor data collected onboard from 46 sensors, with 90,748 measurements per sensor. Conclusion: ACon is an important step in dealing with uncertainty affecting contextual requirements at runtime while considering end-user interaction. ACon supports systems in analysing the environment to adapt contextual requirements, and it complements existing requirements monitoring approaches by keeping the requirements monitoring specification up-to-date. Consequently, it avoids manual analysis that is usually costly in today's complex system environments.
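    The feedback-loop idea in the abstract can be sketched as: detect that context monitoring has become unreliable, then mine recent data to re-derive (re-operationalize) the context condition. This is only an illustrative toy, assuming a simple missing-readings check and a threshold rule in place of the paper's actual data-mining machinery:

    ```python
    # Toy sketch of a context-adaptation feedback loop in the spirit of
    # ACon. The uncertainty check and the 2-sigma rule are illustrative
    # assumptions; the paper mines real multi-sensor streams instead.
    from statistics import mean, stdev

    def monitoring_reliable(readings, expected_count):
        """Detect runtime uncertainty: too many missing sensor readings."""
        return len(readings) >= 0.8 * expected_count

    def reoperationalize(history):
        """Re-derive a simple validity condition from recent sensor data."""
        mu, sigma = mean(history), stdev(history)
        # New operationalization: value is "in context" within 2 sigma.
        return lambda value: abs(value - mu) <= 2 * sigma

    recent = [20.1, 19.8, 20.4, 20.0, 19.9, 20.2]
    if not monitoring_reliable(recent, expected_count=10):
        in_context = reoperationalize(recent)  # update the specification
    else:
        in_context = lambda value: 19.5 <= value <= 20.5
    ```

    The key design point mirrored here is that the monitoring specification itself (the `in_context` condition) is the thing being updated at runtime, rather than requiring manual re-analysis.
    
    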

    Performance measurement : challenges for tomorrow

    This paper demonstrates that the context within which performance measurement is used is changing. The key questions posed are: Is performance measurement ready for the emerging context? What are the gaps in our knowledge? And which lines of enquiry do we need to pursue? A literature synthesis conducted by a team of multidisciplinary researchers charts the evolution of the performance-measurement literature and identifies that the literature largely follows emerging business and global trends. The ensuing discussion introduces the currently emerging and predicted future trends and explores how current knowledge on performance measurement may deal with the emerging context. This results in the identification of specific challenges for performance measurement within a holistic systems-based framework. The principal limitation of the paper is that it covers a broad literature base without in-depth analysis of any particular aspect of performance measurement; this weakness, however, is also the paper's strength. Most significant is the need to rethink how we research the field of performance measurement by taking a holistic systems-based approach, recognising the integrated and concurrent nature of the challenges that practitioners, and consequently the field, face.

    The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research

    Copyright © 2008 Wiley Periodicals Inc. This is the accepted version of the following article: Donovan, C. (2008), The Australian Research Quality Framework: A live experiment in capturing the social, economic, environmental, and cultural returns of publicly funded research. New Directions for Evaluation, 2008: 47–60, published in final form at http://onlinelibrary.wiley.com/doi/10.1002/ev.260/abstract.
    The author regards the development of Australia's ill-fated Research Quality Framework (RQF) as a "live experiment" in determining the most appropriate approach to evaluating the extra-academic returns, or "impact," of a nation's publicly funded research. The RQF was at the forefront of an international movement toward richer qualitative, contextual approaches that aimed to gauge the wider economic, social, environmental, and cultural benefits of research. Its construction and implementation sent mixed messages and created confusion about what impact is, and how it is best measured, to the extent that this bold live experiment did not come to fruition.

    Randomised controlled trials of complex interventions and large-scale transformation of services

    Complex interventions and large-scale transformations of services are necessary to meet the health-care challenges of the 21st century. However, the evaluation of these types of interventions is challenging and requires methodological development. Innovations such as cluster randomised controlled trials, stepped-wedge designs, and non-randomised evaluations provide options to meet the needs of decision-makers. Adoption of theory and logic models can help clarify causal assumptions, and process evaluation can assist in understanding delivery in context. Issues of implementation must also be considered throughout intervention design and evaluation to ensure that results can be scaled for population benefit. Relevance requires evaluations conducted under real-world conditions, which in turn requires a pragmatic attitude to design. The increasing complexity of interventions and evaluations threatens the ability of researchers to meet decision-makers' needs for rapid results. Improvements in efficiency are thus crucial, with electronic health records offering significant potential.