
    Search Tool Implementation for Historical Archive

    Dr. Linda Arnold's archival project "Mexican-American War and the Media" is an underutilized resource. It is the only archive of its kind, providing contrasting primary sources on the War. To make the archive's large body of material more accessible to researchers and students, I added search functionality to the site. Several tools were implemented and tested; Perlfect, a Perl-based open-source search engine, was determined to be the best option. This report includes an outline of the steps taken to implement the search tool, a user's manual, a developer's manual, and options for future work. The archive may be accessed at http://www.majbill.vt.edu/history/mxamwar/index.htm

    Hand-off Tool Implementation in Post-Anesthesia Care

    Patient safety and the improvement of care outcomes in the pre-operative and post-operative environment are areas of ongoing focus and development. The post-operative hand-off report is one aspect of the continuity of patient care that affects patient outcomes. Clear, concise, and accurate communication about patients and their postoperative status is one way that providers and caregivers can help ensure patient safety. Many organizations emphasize the importance of clear, effective communication, in the form of a standardized hand-off tool, to maximize patient safety. The purpose of this project was to standardize clear, concise, effective communication from post-anesthesia care. A hand-off tool was developed and implemented to help standardize post-anesthesia communication to intensive care and medical-surgical floors. A survey was used to evaluate the effectiveness and delivery of the hand-off tool. A PowerPoint presentation on the importance of the hand-off tool, along with the survey results, was shared with the anesthesia providers of the facility.

    An Intuitive Automated Modelling Interface for Systems Biology

    We introduce a natural language interface for building stochastic pi calculus models of biological systems. In this language, complex constructs describing biochemical events are built from the basic primitives of association, dissociation and transformation. The language thus allows biochemical systems to be modelled modularly, by describing their dynamics in a narrative-style language, while making amendments, refinements and extensions to the models easy. We demonstrate the language on a model of Fc-gamma receptor phosphorylation during phagocytosis. We provide a tool implementation of the translation into a stochastic pi calculus language, Microsoft Research's SPiM.
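    The association/dissociation primitives mentioned above can be illustrated with a Gillespie-style stochastic simulation. The sketch below is in Python, not the paper's language or SPiM; the reaction A + B ⇌ AB, its rates, and all names are assumptions made for illustration only.

```python
import random

# Hypothetical sketch: Gillespie-style stochastic simulation of the basic
# association/dissociation primitives. Rates and species are assumptions.

def gillespie(state, rules, t_end, seed=0):
    """state: dict species -> count; rules: list of (rate_fn, update_fn)."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [rate(state) for rate, _ in rules]
        total = sum(props)
        if total == 0:          # no reaction can fire
            break
        t += rng.expovariate(total)        # exponential waiting time
        r = rng.uniform(0, total)          # pick a rule by propensity
        for p, (_, update) in zip(props, rules):
            if r < p:
                update(state)
                break
            r -= p
    return state

k_assoc, k_dissoc = 0.01, 0.1
rules = [
    # Association: A + B -> AB, propensity k_assoc * #A * #B
    (lambda s: k_assoc * s["A"] * s["B"],
     lambda s: s.update(A=s["A"] - 1, B=s["B"] - 1, AB=s["AB"] + 1)),
    # Dissociation: AB -> A + B, propensity k_dissoc * #AB
    (lambda s: k_dissoc * s["AB"],
     lambda s: s.update(A=s["A"] + 1, B=s["B"] + 1, AB=s["AB"] - 1)),
]
final = gillespie({"A": 100, "B": 100, "AB": 0}, rules, t_end=10.0)
```

    Whatever trajectory the random walk takes, mass is conserved: each A (and each B) is either free or bound in an AB complex.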

    What Does it Take to Make Discovery a Success?: A Survey of Discovery Tool Adoption, Instruction, and Evaluation Among Academic Libraries

    Discovery tools have been widely adopted by academic libraries, yet little information exists that connects common practices regarding discovery tool implementation, maintenance, assessment, and staffing with conventions for research and instruction. The authors surveyed heads of reference and instruction departments in research and land-grant university libraries. The survey results revealed common practices with discovery tools among academic libraries. This study also draws connections between operational, instructional, and assessment practices and perceptions that participants have of the success of their discovery tool. Participants who indicated successful implementation of their discovery tool hailed from institutions that made significant commitments to the operations, maintenance, and acceptance of their discovery tool. Participants who indicated an unsuccessful implementation, or who were unsure about the success of their implementation, did not make lasting commitments to the technical maintenance, operations, and acceptance of their discovery tool.

    Automatic Verification of Erlang-Style Concurrency

    This paper presents an approach to verify safety properties of Erlang-style, higher-order concurrent programs automatically. Inspired by Core Erlang, we introduce Lambda-Actor, a prototypical functional language with pattern-matching algebraic data types, augmented with process creation and asynchronous message-passing primitives. We formalise an abstract model of Lambda-Actor programs called Actor Communicating System (ACS) which has a natural interpretation as a vector addition system, for which some verification problems are decidable. We give a parametric abstract interpretation framework for Lambda-Actor and use it to build a polytime computable, flow-based, abstract semantics of Lambda-Actor programs, which we then use to bootstrap the ACS construction, thus deriving a more accurate abstract model of the input program. We have constructed Soter, a tool implementation of the verification method, thereby obtaining the first fully-automatic, infinite-state model checker for a core fragment of Erlang. We find that in practice our abstraction technique is accurate enough to verify an interesting range of safety properties. Though the ACS coverability problem is Expspace-complete, Soter can analyse these verification problems surprisingly efficiently.

    Comment: 12 pages plus appendix, 4 figures, 1 table. The tool is available at http://mjolnir.cs.ox.ac.uk/soter
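    The Erlang-style primitives the abstract builds on, process creation, asynchronous send, and blocking receive, can be sketched in Python. This illustrates the programming model under discussion, not Lambda-Actor itself; the `Actor` class, the counter process, and the message tags are all hypothetical.

```python
import queue
import threading

# Hypothetical sketch of the Erlang-style actor model: spawn a process,
# send it messages asynchronously, and let it receive them in order.

class Actor:
    def __init__(self, behaviour):
        self.mailbox = queue.Queue()          # unbounded, asynchronous
        self.thread = threading.Thread(target=behaviour, args=(self,))
        self.thread.start()

    def send(self, msg):                      # fire-and-forget send
        self.mailbox.put(msg)

    def receive(self):                        # blocks until a message arrives
        return self.mailbox.get()

def spawn(behaviour):
    return Actor(behaviour)

# A counter process that dispatches on message tags, in the style of
# Erlang's pattern-matching receive.
results = []
def counter(self, n=0):
    while True:
        msg = self.receive()
        if msg[0] == "incr":
            n += 1
        elif msg[0] == "get":
            msg[1].append(n)   # reply via a shared list, for brevity
        elif msg[0] == "stop":
            return

c = spawn(counter)
for _ in range(3):
    c.send(("incr",))
c.send(("get", results))
c.send(("stop",))
c.thread.join()
```

    Because each mailbox is a FIFO queue consumed by a single process, the three increments are guaranteed to be processed before the query, so `results` ends up holding `[3]`.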

    Analysis of Timed and Long-Run Objectives for Markov Automata

    Markov automata (MAs) extend labelled transition systems with random delays and probabilistic branching. Action-labelled transitions are instantaneous and yield a distribution over states, whereas timed transitions impose a random delay governed by an exponential distribution. MAs are thus a nondeterministic variation of continuous-time Markov chains. MAs are compositional and are used to provide a semantics for engineering frameworks such as (dynamic) fault trees, (generalised) stochastic Petri nets, and the Architecture Analysis & Design Language (AADL). This paper considers the quantitative analysis of MAs. We consider three objectives: expected time, long-run average, and timed (interval) reachability. Expected time objectives focus on determining the minimal (or maximal) expected time to reach a set of states. Long-run objectives determine the fraction of time to be in a set of states when considering an infinite time horizon. Timed reachability objectives are about computing the probability to reach a set of states within a given time interval. This paper presents the foundations and details of the algorithms and their correctness proofs. We report on several case studies conducted using a prototypical tool implementation of the algorithms, driven by the MAPA modelling language for efficiently generating MAs.

    Comment: arXiv admin note: substantial text overlap with arXiv:1305.705
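    The expected-time objective described above can be illustrated on the degenerate case of an MA without nondeterminism, i.e. a continuous-time Markov chain. For a state s outside the goal set, the expected time satisfies e(s) = 1/E(s) + Σ P(s,s')·e(s'), where E(s) is the exit rate and P the embedded jump probabilities. The sketch below solves this by simple fixed-point iteration; the example chain, its rates, and the function name are assumptions, not taken from the paper.

```python
# Hypothetical sketch: expected-time reachability in a CTMC (an MA with no
# nondeterministic choices). For s outside the goal set G:
#   e(s) = 1/E(s) + sum over s' of P(s, s') * e(s')
# with E(s) the exit rate and P the embedded jump probabilities.

def expected_time(rates, goal):
    """rates[s] = {s': rate}; returns e(s) for every state by iteration."""
    transient = [s for s in rates if s not in goal]
    e = {s: 0.0 for s in rates}               # e(g) = 0 for goal states
    # Fixed-point iteration (plain value iteration; fine for small chains).
    for _ in range(10_000):
        for s in transient:
            exit_rate = sum(rates[s].values())
            e[s] = 1.0 / exit_rate + sum(
                (r / exit_rate) * e[t] for t, r in rates[s].items()
            )
    return e

# s0 --rate 2--> s1 --rate 1--> goal:
# e(s1) = 1/1 = 1.0, and e(s0) = 1/2 + e(s1) = 1.5.
chain = {"s0": {"s1": 2.0}, "s1": {"goal": 1.0}, "goal": {}}
e = expected_time(chain, goal={"goal"})
```

    The full MA setting additionally quantifies over schedulers resolving the nondeterminism, turning the linear equations into a min/max Bellman system; the iteration above is the special case with a single choice per state.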

    A Generic Framework for Engineering Graph Canonization Algorithms

    The state-of-the-art tools for practical graph canonization are all based on the individualization-refinement paradigm, and their differences lie primarily in the choice of heuristics they include and in the actual tool implementation. It is thus not possible to make a direct comparison of how individual algorithmic ideas affect the performance on different graph classes. We present an algorithmic software framework that facilitates implementation of heuristics as independent extensions to a common core algorithm. It therefore becomes easy to perform a detailed comparison of the performance and behaviour of different algorithmic ideas. Implementations are provided of a range of algorithms for tree traversal, target cell selection, and node invariants, including choices from the literature and new variations. The framework readily supports extraction and visualization of detailed data from separate algorithm executions for subsequent analysis and development of new heuristics. Using collections of different graph classes we investigate the effect of varying the selections of heuristics, often revealing exactly which individual algorithmic choice is responsible for particularly good or bad performance. On several benchmark collections, including a newly proposed class of difficult instances, we additionally find that our implementation performs better than the current state-of-the-art tools.
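    The refinement half of the individualization-refinement paradigm is commonly colour refinement (1-WL): repeatedly recolour each vertex by its current colour together with the multiset of its neighbours' colours, until the partition stabilises. The sketch below is a minimal Python illustration of that core step, not the framework from the abstract; the function name and the example graph are assumptions.

```python
# Hypothetical sketch: colour refinement (1-WL), the refinement procedure
# at the core of individualization-refinement canonization tools.

def colour_refine(adj, colours=None):
    """adj: dict vertex -> set of neighbours; returns a stable colouring."""
    if colours is None:
        colours = {v: 0 for v in adj}         # start with a uniform colouring
    while True:
        # New signature: own colour plus sorted multiset of neighbour colours.
        signature = {
            v: (colours[v], tuple(sorted(colours[u] for u in adj[v])))
            for v in adj
        }
        # Canonically renumber the distinct signatures to fresh colour ids.
        ids = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        new = {v: ids[signature[v]] for v in adj}
        if new == colours:                    # partition is stable
            return colours
        colours = new

# A path on 4 vertices: the two endpoints (degree 1) end up in one colour
# class, the two inner vertices (degree 2) in another.
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
stable = colour_refine(path)
```

    When refinement stalls on a non-discrete partition, individualization artificially singles out one vertex of a non-trivial colour class and refines again, branching over the choices; the heuristics the abstract compares (target cell selection, tree traversal, node invariants) govern exactly that branching.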