
    A safety analysis approach to clinical workflows : application and evaluation

    Clinical workflows are safety-critical: they have the potential to cause harm or death to patients, so their safety needs to be considered as early as possible in the development process. Effective safety analysis methods are required to ensure the safety of these high-risk workflows, because errors arising during routine work can propagate through the workflow and result in harmful failures of the system’s output. This paper shows how to apply an approach for the safety analysis of clinical workflows to the workflow of a radiology department and evaluates the approach in terms of usability and benefits. The outcomes of using this approach include identification of the root causes of hazardous workflow failures that may put patients’ lives at risk. We show that the approach is applicable to this area of healthcare and adds value through detailed information on possible failures, both their causes and their effects; it therefore has the potential to improve the safety of radiology and other clinical workflows.
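
    The paper’s own method is not reproduced in this abstract; purely as a hypothetical sketch of the propagation idea it describes (the step names and the propagation rule are invented, not taken from the paper), the downstream reach of a failed workflow step can be modeled as reachability in a directed graph of steps:

    ```python
    from collections import deque

    # Hypothetical radiology workflow: each step lists the steps it feeds into.
    workflow = {
        "order_entry": ["scheduling"],
        "scheduling": ["image_acquisition"],
        "image_acquisition": ["image_review"],
        "image_review": ["report_writing"],
        "report_writing": ["report_delivery"],
        "report_delivery": [],
    }

    def downstream_effects(failed_step):
        """Return all steps whose output may be compromised if failed_step fails."""
        seen, queue = set(), deque(workflow[failed_step])
        while queue:
            step = queue.popleft()
            if step not in seen:
                seen.add(step)
                queue.extend(workflow[step])
        return sorted(seen)

    print(downstream_effects("image_acquisition"))
    # ['image_review', 'report_delivery', 'report_writing']
    ```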

    A Universal Machine for Biform Theory Graphs

    Broadly speaking, there are two kinds of semantics-aware assistant systems for mathematics: proof assistants express the semantics in logic and emphasize deduction, while computer algebra systems express the semantics in programming languages and emphasize computation. Combining the complementary strengths of both approaches while mending their complementary weaknesses has been an important goal of the mechanized mathematics community for some time. We pick up on the idea of biform theories and interpret it in the MMT/OMDoc framework, which introduced the foundations-as-theories approach and can thus represent both logics and programming languages as theories. This yields a formal, modular framework of biform theory graphs in which specifications and implementations share the module system and typing information. We present automated knowledge management workflows that interface to existing specification/programming tools and enable an OpenMath Machine that operationalizes biform theories, evaluating expressions by exhaustively applying the implementations of the respective operators. We evaluate the new biform framework by adding implementations to the OpenMath standard content dictionaries. (Comment: Conferences on Intelligent Computer Mathematics, CICM 2013. The final publication is available at http://link.springer.com)
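
    The OpenMath Machine itself is not shown in this abstract; purely as a hypothetical sketch of “evaluating expressions by exhaustively applying the implementations of the respective operators” (the term encoding and the implementation table below are invented), consider a bottom-up evaluator over symbolic terms that reduces wherever an implementation exists and leaves the rest symbolic:

    ```python
    # Expressions are nested tuples (operator, arg1, arg2, ...); literals are values.
    # An implementation table maps operator names to executable functions.
    impls = {
        "plus": lambda a, b: a + b,
        "times": lambda a, b: a * b,
    }

    def evaluate(expr):
        if not isinstance(expr, tuple):      # a literal: nothing to apply
            return expr
        op, *args = expr
        args = [evaluate(a) for a in args]   # evaluate subterms first
        impl = impls.get(op)
        # Apply the implementation only if every argument reduced to a value;
        # otherwise keep the (partially evaluated) symbolic term.
        if impl and all(not isinstance(a, tuple) for a in args):
            return impl(*args)
        return (op, *args)

    print(evaluate(("plus", ("times", 2, 3), 4)))   # 10
    print(evaluate(("minus", ("plus", 1, 1), 1)))   # ('minus', 2, 1): no impl for "minus"
    ```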

    Comparing Social Science and Computer Science Workflow Processes for Studying Group Interactions

    In this article, a team of authors from the Geeks and Groupies workshop in Leiden, the Netherlands, compares prototypical approaches to studying group interaction in the social science and computer science disciplines, which we call workflows. To help social and computer science scholars understand and manage these differences, we organize a workflow into three major stages: research design, data collection, and analysis. For each stage, we offer a brief overview of how scholars from each discipline work. We then compare those approaches and identify potential synergies and challenges. We conclude by discussing potential directions for more integrated and mutually beneficial collaboration that go beyond the producer–consumer model.

    The views of health guideline developers on the use of automation in health evidence synthesis

    BACKGROUND: The increasingly rapid rate of evidence publication has made it difficult for evidence synthesis (systematic reviews and health guidelines) to be kept continually up to date. One proposed solution is the use of automation in health evidence synthesis. Guideline developers are key gatekeepers in the acceptance and use of evidence, and their opinions on the potential use of automation are therefore crucial. METHODS: The objective of this study was to analyze the attitudes of guideline developers towards the use of automation in health evidence synthesis. The Diffusion of Innovations framework was chosen as the initial analytical framework because it encapsulates some of the core issues thought to affect the adoption of new innovations in practice. This well-established theory posits five dimensions that affect the adoption of novel technologies: Relative Advantage, Compatibility, Complexity, Trialability, and Observability. Eighteen interviews were conducted with individuals who were currently working, or had previously worked, in guideline development. After transcription, a multiphase mixed deductive and grounded approach was used to analyze the data. First, transcripts were coded deductively, using Rogers’ Diffusion of Innovations dimensions as the top-level themes. Second, sub-themes within the framework were identified using a grounded approach. RESULTS: Participants were consistently most concerned with the extent to which an innovation is in line with current values and practices (i.e., Compatibility in the Diffusion of Innovations framework). Participants were also concerned with Relative Advantage and Observability, which were discussed in approximately equal amounts; for the latter, participants expressed a desire for transparency in the methodology of automation software. Participants were noticeably less interested in Complexity and Trialability, which were discussed infrequently. These results were reasonably consistent across all participants. CONCLUSIONS: If machine learning and other automation technologies are to be used more widely, and to their full potential, in systematic reviews and guideline development, it is crucial to ensure that new technologies are in line with current values and practice. It will also be important to maximize the transparency of the methods of these technologies to address the concerns of guideline developers.

    The Adoption and Effectiveness of Automation in Health Evidence Synthesis

    Background: Health systems worldwide are often informed by evidence-based guidelines, which in turn rely heavily on systematic reviews. Systematic reviews are currently hindered by the increasing volume of new research and by its variable quality. Automation has the potential to alleviate this problem but is not widely used in health evidence synthesis. This thesis sought to address the following: why is automation adopted (or not), and what effects does it have when it is put into use? / Methods: Rogers’ Diffusion of Innovations theory, as a well-established and widely used framework, informed the study design and analysis. Adoption barriers and facilitators were explored through a thematic analysis of guideline developers’ opinions towards automation, and by mapping the adoption journey of a machine learning (ML) tool among Cochrane Information Specialists (CISs). A randomised trial of ML assistance in Risk of Bias (RoB) assessments and a cost-effectiveness analysis of a semi-automated workflow in the maintenance of a living evidence map each evaluated the effects of automation in practice. / Results: Adoption decisions are most strongly informed by the professional cultural expectations of health evidence synthesis. The stringent expectations of systematic reviewers and their users must be met before any other characteristic of an automation technology is considered by potential adopters. Ease of use increases in importance as a tool becomes more diffused across a population. The randomised trial showed that ML-assisted RoB assessments were non-inferior to assessments completed entirely by human researcher effort. The cost-effectiveness analysis showed that a semi-automated workflow identified more relevant studies than the manual workflow and was less costly. / Conclusions: Automation can have substantial benefits when integrated into health evidence workflows. Wider adoption of automation tools will be facilitated by ensuring they are aligned with the professional values of the field and limited in technical complexity.
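
    The trial’s actual analysis is not given in this abstract; purely as a hypothetical sketch of how a non-inferiority result of this kind can be checked (all counts and the margin below are invented), one compares the lower confidence bound on the accuracy difference against a pre-specified margin:

    ```python
    import math

    def noninferior(x_new, n_new, x_ref, n_ref, margin=0.05, z=1.96):
        """Normal-approximation non-inferiority check for two proportions.

        The new method is declared non-inferior if the lower bound of the
        95% CI for (p_new - p_ref) lies above -margin.
        """
        p_new, p_ref = x_new / n_new, x_ref / n_ref
        se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
        lower = (p_new - p_ref) - z * se
        return lower > -margin, lower

    # Hypothetical counts: 410/470 correct ML-assisted vs 400/460 human-only.
    ok, lower = noninferior(410, 470, 400, 460)
    print(f"non-inferior: {ok}, lower 95% CI bound on difference: {lower:.3f}")
    ```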

    Preparing for the spread of patient-reported outcomes (PROs) data collection from primary care to community pharmacy: stakeholder insights

    Medication non-adherence is a significant public health problem. Patient-reported outcomes (PROs) offer a rich data source to facilitate resolution of medication non-adherence. PatientToc™ is an electronic PRO data collection software originally implemented at primary care practices in California, United States (US). Currently, the use of standardized PRO data collection systems in US community pharmacies is limited. Thus, we are conducting a two-phase evaluation of the spread and scale of PatientToc™ to US Midwestern community pharmacies. This report focuses on the first phase of the evaluation. The objective of this phase was to prepare for the implementation of PatientToc™ in community pharmacies by conducting a pre-implementation developmental formative evaluation to (1) identify potential barriers, facilitators, and actionable recommendations for PatientToc™ implementation and (2) create a draft implementation toolkit.

    Digital fabrication: From tool to a way of thinking

    Digital design appears as an integrated process from conceptualization to materialization and fabrication. The main question is: how has the new medium changed the workflow in architecture from a linear model to a cyclic model, and what is the role of the new materialization as a form of design thinking? This paper is part of a study that investigates the architectural design process and starts from the premise that we should expand the study of design methods to include other approaches. It considers digital fabrication not as a tool but as an integrated strategy in collaborative digital processes that can allow better communication along the design process. It presents the development of design methodologies in order to contribute to a greater understanding of the methodology of design projects, mindful of the fact that each one reflects the period in which it was developed.

    Enabling collaborative numerical modeling in earth sciences using knowledge infrastructure

    Knowledge Infrastructure is an intellectual framework for creating, sharing, and distributing knowledge. In this paper, we use Knowledge Infrastructure to address common barriers to entry for numerical modeling in the Earth sciences: computational modeling education, replicating published model results, and reusing published models to extend research. We outline six critical functional requirements: 1) workflows designed for new users; 2) a community-supported collaborative web platform; 3) distributed data storage; 4) a software environment; 5) a personalized cloud-based high-performance computing platform; and 6) a standardized open-source modeling framework. Our methods meet these functional requirements by providing three interactive computational narratives for hands-on, problem-based research demonstrating how to use Landlab on HydroShare. Landlab is an open-source toolkit for building, coupling, and exploring two-dimensional numerical models. HydroShare is an online collaborative environment for the sharing of data and models. We describe the methods we are using to accelerate knowledge development by providing a suite of modular and interoperable process components that allows students, domain experts, collaborators, researchers, and sponsors to learn by exploring shared data and modeling resources. The system is designed to support uses along the continuum from fully developed modeling applications to prototyping of research software tools.
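
    Landlab is the open-source Python toolkit named in the abstract; the following minimal sketch shows its build-couple-run pattern with a raster grid and the LinearDiffuser process component. The grid size, diffusivity, and time step are arbitrary illustrative choices, not values from the paper:

    ```python
    import numpy as np
    from landlab import RasterModelGrid
    from landlab.components import LinearDiffuser

    # Build a 2D model grid: 25 x 40 nodes at 10 m spacing (arbitrary values).
    grid = RasterModelGrid((25, 40), xy_spacing=10.0)
    z = grid.add_zeros("topographic__elevation", at="node")
    z += np.random.rand(grid.number_of_nodes)  # small random initial relief

    # Couple a process component to the grid: linear hillslope diffusion.
    diffuser = LinearDiffuser(grid, linear_diffusivity=0.01)  # m^2/yr

    dt = 100.0  # years per step (arbitrary)
    for _ in range(50):
        diffuser.run_one_step(dt)

    print(z.mean(), z.max())  # relief smooths as diffusion acts
    ```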