
    Designing an automated clinical decision support system to match clinical practice guidelines for opioid therapy for chronic pain

    Abstract
    Background: Opioid prescribing for chronic pain is common and controversial, but recommended clinical practices are followed inconsistently in many clinical settings. Strategies for increasing adherence to clinical practice guideline recommendations are needed to increase the effectiveness and reduce the negative consequences of opioid prescribing in chronic pain patients.
    Methods: Here we describe the process and outcomes of a project to operationalize the 2003 VA/DoD Clinical Practice Guideline for Opioid Therapy for Chronic Non-Cancer Pain into a computerized decision support system (DSS) to encourage good opioid prescribing practices during primary care visits. We based the DSS on the existing ATHENA-DSS. We used an iterative process of design, testing, and revision of the DSS by a diverse team including guideline authors, medical informatics experts, clinical content experts, and end-users to convert the written clinical practice guideline into a computable algorithm that generates patient-specific recommendations for care based on existing information in the electronic medical record (EMR), together with a set of clinical tools.
    Results: The iterative revision process identified numerous and varied problems with the initially designed system despite diverse expert participation in the design process. The process of operationalizing the guideline identified areas in which the guideline was vague, left decisions to clinical judgment, or required clarification of detail to ensure safe clinical implementation. The revisions led to workable solutions to problems, defined the limits of the DSS and its utility in clinical practice, improved integration into clinical workflow, and improved the clarity and accuracy of system recommendations and tools.
    Conclusions: Use of this iterative process led to development of a multifunctional DSS that met the approval of the clinical practice guideline authors, content experts, and clinicians involved in testing. The process and experiences described provide a model for the development of other DSSs that translate written guidelines into actionable, real-time clinical recommendations.
    http://deepblue.lib.umich.edu/bitstream/2027.42/78267/1/1748-5908-5-26.xml
    http://deepblue.lib.umich.edu/bitstream/2027.42/78267/2/1748-5908-5-26.pdf
    http://deepblue.lib.umich.edu/bitstream/2027.42/78267/3/1748-5908-5-26-S3.TIFF
    http://deepblue.lib.umich.edu/bitstream/2027.42/78267/4/1748-5908-5-26-S2.TIFF
    http://deepblue.lib.umich.edu/bitstream/2027.42/78267/5/1748-5908-5-26-S1.TIFF
    Peer Reviewed
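
    As a rough illustration of what translating a guideline into a computable, patient-specific recommendation algorithm can look like, here is a minimal Python sketch driven by EMR-derived fields. All field names, thresholds, and rules are hypothetical placeholders; they do not reproduce the VA/DoD guideline content or the ATHENA-DSS implementation.

    from dataclasses import dataclass
    from datetime import date
    from typing import List, Optional

    @dataclass
    class PatientRecord:
        """Minimal stand-in for facts pulled from the EMR (hypothetical fields)."""
        morphine_equivalent_mg_per_day: float
        last_urine_drug_screen: Optional[date]
        has_signed_opioid_agreement: bool

    def recommend(patient: PatientRecord, today: date) -> List[str]:
        """Generate patient-specific recommendations from encoded rules.

        The rules and thresholds below are illustrative placeholders only.
        """
        recs = []
        if patient.morphine_equivalent_mg_per_day >= 100:
            recs.append("High daily opioid dose: consider taper or specialty consult.")
        if (patient.last_urine_drug_screen is None
                or (today - patient.last_urine_drug_screen).days > 365):
            recs.append("No urine drug screen within the past year: consider ordering one.")
        if not patient.has_signed_opioid_agreement:
            recs.append("No opioid care agreement on file: obtain a signed agreement.")
        return recs

    print(recommend(PatientRecord(120.0, None, False), date(2010, 5, 1)))

    Even a toy version like this makes the paper's point visible: every rule forces a concrete threshold and data source where the written guideline may be vague or defer to clinical judgment.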

    Towards Systemic Evaluation

    Problems of conventional evaluation models can be understood as an impoverished ‘conversation’ between realities (of non-linearity, indeterminate attributes, and ever-changing context), and models of evaluating such realities. Meanwhile, ideas of systems thinking and complexity science—grouped here under the acronym STCS—struggle to gain currency in the big ‘E’ world of institutionalized evaluation. Four evaluation practitioners familiar with evaluation tools associated with STCS offer perspectives on issues regarding mainstream uptake of STCS in the big ‘E’ world. The perspectives collectively suggest three features of practicing systemic evaluation: (i) developing value in conversing between bounded values (evaluations) and unbounded reality (evaluand), with humility; (ii) developing response-ability with evaluand stakeholders based on reflexivity, with empathy; and (iii) developing adaptive rather than mere contingent use(fulness) of STCS ‘tools’ as part of evaluation praxis, with inevitable fallibility and an orientation towards bricolage (adaptive use). The features hint towards systemic evaluation as core to a reconfigured notion of developmental evaluation.

    Fast matrix computations for pair-wise and column-wise commute times and Katz scores

    We first explore methods for approximating the commute time and Katz score between a pair of nodes. These methods are based on the approach of matrices, moments, and quadrature developed in the numerical linear algebra community. They rely on the Lanczos process and provide upper and lower bounds on an estimate of the pair-wise scores. We also explore methods to approximate the commute times and Katz scores from a node to all other nodes in the graph. Here, our approach for the commute times is based on a variation of the conjugate gradient algorithm, and it provides an estimate of all the diagonal entries of the inverse of a matrix. Our technique for the Katz scores is based on exploiting an empirical localization property of the Katz matrix. We adapt algorithms used for personalized PageRank computation to these Katz scores and theoretically show that this approach is convergent. We evaluate these methods on 17 real-world graphs ranging in size from 1,000 to 1,000,000 nodes. Our results show that our pair-wise commute time method and column-wise Katz algorithm both have attractive theoretical properties and empirical performance.
    Comment: 35 pages, journal version of http://dx.doi.org/10.1007/978-3-642-18009-5_13, which has been submitted for publication. Please see http://www.cs.purdue.edu/homes/dgleich/publications/2011/codes/fast-katz/ for supplemental code.
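
    For a concrete baseline of the two quantities involved, the sketch below computes a column of Katz scores by solving (I - alpha*A) x = e_source with conjugate gradients, and a pairwise commute time from the graph Laplacian. These plain SciPy solves illustrate the definitions only; they are not the Lanczos quadrature bounds or the push-style algorithm developed in the paper.

    import numpy as np
    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def katz_column(A, source, alpha):
        """All Katz scores from `source`: solve (I - alpha*A) x = e_source.

        The underlying Neumann series converges for alpha < 1/lambda_max(A);
        for such alpha the system matrix is symmetric positive definite,
        so conjugate gradients applies.
        """
        n = A.shape[0]
        e = np.zeros(n)
        e[source] = 1.0
        M = sp.eye(n, format="csr") - alpha * A
        x, info = spla.cg(M, e)
        assert info == 0, "CG did not converge"
        return x - e  # drop the k = 0 identity term of the series

    def commute_time(A, i, j):
        """Pairwise commute time via the graph Laplacian:
        C(i, j) = vol(G) * (e_i - e_j)^T L^+ (e_i - e_j).
        """
        deg = np.asarray(A.sum(axis=1)).ravel()
        L = sp.diags(deg) - A
        b = np.zeros(A.shape[0])
        b[i], b[j] = 1.0, -1.0  # b is orthogonal to L's null space
        x, info = spla.minres(L, b)  # consistent singular symmetric system
        assert info == 0, "MINRES did not converge"
        return deg.sum() * float(b @ x)

    # Tiny demo on the path graph 0-1-2: commute_time(A, 0, 2) should be 8
    # (effective resistance 2 times graph volume 4).
    A = sp.csr_matrix(np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]))
    print(katz_column(A, source=0, alpha=0.5))  # needs alpha < 1/sqrt(2) here
    print(commute_time(A, 0, 2))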

    Approaches to the teaching of design : an engineering subject centre guide

    This booklet seeks to provide a resource for all those with an interest in design, and the education and training of engineering students to carry out the design process. A brief description of the internal and external requirements for design in the engineering curriculum is followed by a review of different approaches to design teaching currently employed in engineering schools and universities worldwide. Suggestions for further reading about each approach and a reference section are also provided.

    Towards a Flexible Deep Learning Method for Automatic Detection of Clinically Relevant Multi-Modal Events in the Polysomnogram

    Much attention has been given to automatic sleep staging algorithms in past years, but the detection of discrete events in sleep studies is also crucial for precise characterization of sleep patterns and possible diagnosis of sleep disorders. We propose here a deep learning model for automatic detection and annotation of arousals and leg movements. Both of these are commonly seen during normal sleep, while an excessive amount of either is linked to disrupted sleep patterns, excessive daytime sleepiness impacting quality of life, and various sleep disorders. Our model was trained on 1,485 subjects and tested on 1,000 separate recordings of sleep. We tested two different experimental setups and found that optimal arousal detection was attained by including a recurrent neural network module in our default model with a dynamic default event window (F1 = 0.75), while optimal leg movement detection was attained using a static event window (F1 = 0.65). Our work shows promise while still allowing for improvements. Specifically, future research will explore the proposed model as a general-purpose sleep analysis model.
    Comment: Accepted for publication in the 41st International Engineering in Medicine and Biology Conference (EMBC), July 23-27, 2019.
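
    As a rough sketch of window-based multi-event detection on a multi-channel signal, the toy PyTorch model below emits per-window probabilities for two event classes (arousal, leg movement) and includes a recurrent module over the window sequence. The channel count, layer sizes, and window length are assumptions for illustration, not the architecture from the paper.

    import torch
    import torch.nn as nn

    class EventDetector(nn.Module):
        """Per-window probabilities for two event classes from raw PSG signal.

        All hyperparameters here are illustrative placeholders.
        """
        def __init__(self, in_channels=5, hidden=64, n_events=2, window=128):
            super().__init__()
            # Per-window feature extractor: stride equals the window length,
            # so each output step summarizes one (static) event window.
            self.encoder = nn.Conv1d(in_channels, hidden,
                                     kernel_size=window, stride=window)
            # Recurrent module over the sequence of windows, as in the
            # setup the abstract reports helped arousal detection.
            self.rnn = nn.GRU(hidden, hidden, batch_first=True,
                              bidirectional=True)
            self.head = nn.Linear(2 * hidden, n_events)

        def forward(self, x):                   # x: (batch, channels, time)
            h = self.encoder(x)                 # (batch, hidden, n_windows)
            h = h.transpose(1, 2)               # (batch, n_windows, hidden)
            h, _ = self.rnn(h)
            return torch.sigmoid(self.head(h))  # (batch, n_windows, n_events)

    # Demo: 2 recordings, 5 channels, 1280 samples -> 10 windows each.
    model = EventDetector()
    probs = model(torch.randn(2, 5, 1280))
    print(probs.shape)  # torch.Size([2, 10, 2])

    A dynamic event window, by contrast, would additionally regress the start and duration of each detected event rather than scoring fixed windows.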

    Foundational principles for large scale inference: Illustrations through correlation mining

    When can reliable inference be drawn in the "Big Data" context? This paper presents a framework for answering this fundamental question in the context of correlation mining, with implications for general large scale inference. In large scale data applications like genomics, connectomics, and eco-informatics the dataset is often variable-rich but sample-starved: a regime where the number n of acquired samples (statistical replicates) is far fewer than the number p of observed variables (genes, neurons, voxels, or chemical constituents). Much of the recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received relatively less attention, especially in the setting when the sample size n is fixed and the dimension p grows without bound. To address this gap, we develop a unified statistical framework that explicitly quantifies the sample complexity of various inferential tasks. Sampling regimes can be divided into several categories: 1) the classical asymptotic regime where the variable dimension is fixed and the sample size goes to infinity; 2) the mixed asymptotic regime where both variable dimension and sample size go to infinity at comparable rates; 3) the purely high dimensional asymptotic regime where the variable dimension goes to infinity and the sample size is fixed. Each regime has its niche, but only the latter regime applies to exa-scale data dimension. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
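
    A quick numerical illustration of the purely high dimensional regime: hold the sample size n fixed and let p grow, and the largest spurious correlation among independent variables creeps upward, which is why correlation-screening thresholds must scale with dimension. The NumPy sketch below (sample sizes and trial counts chosen arbitrarily) demonstrates the effect; it illustrates the regime, not the paper's framework.

    import numpy as np

    rng = np.random.default_rng(0)

    def max_spurious_correlation(n, p, trials=20):
        """Average, over trials, of the largest absolute sample correlation
        between variable 0 and the other p-1 variables when all p variables
        are in fact independent standard normals.
        """
        worst = []
        for _ in range(trials):
            X = rng.standard_normal((n, p))
            X -= X.mean(axis=0)                 # center each column
            X /= np.linalg.norm(X, axis=0)      # normalize: dots = correlations
            r = X[:, 1:].T @ X[:, 0]            # correlations with variable 0
            worst.append(np.abs(r).max())
        return float(np.mean(worst))

    # With n fixed at 20, the maximal spurious correlation grows with p.
    for p in (100, 1_000, 10_000):
        print(p, round(max_spurious_correlation(n=20, p=p), 3))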

    Integrative Use of Information Extraction, Semantic Matchmaking and Adaptive Coupling Techniques in Support of Distributed Information Processing and Decision-Making

    In order to derive maximal cognitive benefit from their social, technological and informational environments, military coalitions need to understand how best to exploit available information assets as well as how best to organize their socially-distributed information processing activities. The International Technology Alliance (ITA) program is beginning to address the challenges associated with enhanced cognition in military coalition environments by integrating a variety of research and development efforts. In particular, research in one component of the ITA ('Project 4: Shared Understanding and Information Exploitation') is seeking to develop capabilities that enable military coalitions to better exploit and distribute networked information assets in the service of collective cognitive outcomes (e.g. improved decision-making). In this paper, we provide an overview of the various research activities in Project 4. We also show how these research activities complement one another in terms of supporting coalition-based collective cognition.

    New venture internationalisation and the cluster life cycle: insights from Ireland’s indigenous software industry

    The internationalization of new and small firms has been a long-standing concern of researchers in international business (Coviello and McAuley, 1999; Ruzzier et al., 2006). This topic has been re-invigorated over the last decade by the burgeoning literature on so-called ‘born globals’ (BG) or ‘international new ventures’ (INV) – businesses that confound the expectations of traditional theory by being active internationally at, or soon after, inception (Aspelund et al., 2007; Bell, 1995; Rialp et al., 2005). Until quite recently, this literature had not really considered how the home regional environment of a new venture might influence its internationalization behaviour. However, a handful of recent studies have shown that being founded in a geographic industry ‘cluster’ can positively influence the likelihood of a new venture internationalizing (e.g., Fernhaber et al., 2008; Libaers and Meyer, 2011). This chapter seeks to build on these recent contributions by further probing the relationship between clusters and new venture internationalization. Specifically, taking inspiration from recent work in the thematic research stream on clusters (which spans the fields of economic geography, regional studies and industrial dynamics), the chapter explores how the emergence and internationalization of new ventures might be affected by the ‘cluster life cycle’ context within which they are founded. This issue is examined through a revelatory longitudinal case study of Ireland’s indigenous software cluster. The study investigates the origins and internationalization behaviour of ‘leading’ Irish software ventures but, in contrast to many existing studies, it seeks to understand these firms within the context of the Irish software cluster’s emergence and evolution through a number of ‘life-cycle’ stages.
