90 research outputs found

    Estelle Brodman and the first generation of library automation

    OBJECTIVE: The purpose of this paper is to examine the contributions of Estelle Brodman, PhD, to the early application of computing technologies in health sciences libraries. METHODS: A review of the literature, oral histories, and materials contained in the archives of the Bernard Becker Medical Library at the Washington University School of Medicine was conducted. RESULTS: While early computing technologies were not well suited to library applications, their potential was recognized by visionaries like Dr. Brodman, and creative, innovative projects and programs made their effective use possible. The impact of these early efforts continues to resonate through library services and operations. CONCLUSIONS: Computing technologies have transformed libraries. Dr. Brodman's leadership in the early development and application of these technologies provided significant benefits to the health sciences library community.

    Electronic health record: integrating evidence-based information at the point of clinical decision making

    The authors created two tools to achieve the goals of providing physicians with a way to review alternative diagnoses and improving access to relevant evidence-based library resources without disrupting established workflows. The “diagnostic decision support tool” lifted terms from standard, coded fields in the electronic health record and sent them to Isabel, a differential diagnosis generator, which produced a list of possible diagnoses. The physicians chose their diagnoses and were presented with the “knowledge page,” a collection of evidence-based library resources. Each resource was automatically populated with search results based on the chosen diagnosis. Physicians responded positively to the “knowledge page.”
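    The abstract describes the workflow but not its implementation. The following is a minimal sketch of that two-step flow, written as an assumption: the record fields, the stand-in for the Isabel call, and the resource URLs are illustrative, not the authors' actual integration.

    ```python
    # Hypothetical sketch of the two-tool workflow described above.
    # The Isabel call, field names, and resource search URLs are illustrative
    # assumptions, not the authors' implementation.
    from dataclasses import dataclass

    @dataclass
    class EncounterRecord:
        age: int
        sex: str
        coded_findings: list[str]  # terms lifted from standard, coded EHR fields

    def suggest_diagnoses(record: EncounterRecord) -> list[str]:
        """Send coded findings to a diagnostic decision support service
        (Isabel in the paper) and return its ranked list of possible diagnoses."""
        # Placeholder: a real integration would call the vendor's web service here.
        return ["community-acquired pneumonia", "pulmonary embolism"]

    def build_knowledge_page(diagnosis: str) -> dict[str, str]:
        """Assemble a 'knowledge page': evidence-based library resources,
        each pre-populated with a search for the chosen diagnosis."""
        query = diagnosis.replace(" ", "+")
        return {
            "PubMed": f"https://pubmed.ncbi.nlm.nih.gov/?term={query}",
            "Cochrane Library": f"https://www.cochranelibrary.com/search?q={query}",  # illustrative URL
        }

    record = EncounterRecord(age=67, sex="F", coded_findings=["fever", "cough", "hypoxia"])
    choices = suggest_diagnoses(record)
    page = build_knowledge_page(choices[0])
    ```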

    759–5 Use of an Interactive Electronic Whiteboard to Teach Clinical Cardiology Decision Analysis to Medical Students

    We used innovative state-of-the-art computer and collaboration technologies to teach first-year medical students an analytic methodology for solving difficult clinical cardiology problems and making informed medical decisions. Clinical examples included the decision to administer thrombolytic therapy considering the risk of hemorrhagic stroke, and activity recommendations for athletes at risk for sudden death. Students received instruction on the decision-analytic approach, which integrates pathophysiology, treatment efficacy, diagnostic test interpretation, health outcomes, patient preferences, and cost-effectiveness into a decision-analytic model.

    The traditional environment of a small group and blackboard was significantly enhanced by using an electronic whiteboard, the Xerox LiveBoardℱ. The LiveBoard features an 80486-based personal computer, a large (3’×4’) display, and wireless pens for input. It allowed the integration of decision-analytic software, statistical software, digital slides, and additional media. We developed TIDAL (Team Interactive Decision Analysis in the Large-screen environment), a software package to interactively construct decision trees, calculate expected utilities, and perform one- and two-way sensitivity analyses using pen and gesture inputs. The LiveBoard also allowed the novel incorporation of Gambler, a utility assessment program obtained from the New England Medical Center. Gambler was used to obtain utilities for outcomes such as non-disabling hemorrhagic stroke. The interactive nature of the LiveBoard allowed real-time decision model development by the class, followed by instantaneous calculation of expected utilities and sensitivity analyses. The multimedia aspect and interactivity were conducive to extensive class participation.

    Ten out of eleven students wanted decision-analytic software available for use during their clinical years, and all students would recommend the course to next year's students. We plan to experiment with the electronic collaboration features of this technology and allow groups separated by time or space to collaborate on decisions and explore the models created.
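    To make the decision-analytic step concrete, here is a minimal sketch of the kind of expected-utility calculation and one-way sensitivity analysis that TIDAL-style software performs. The tree structure, probabilities, and utilities below are assumed for illustration; they are not values from the course or the TIDAL package.

    ```python
    # Illustrative expected-utility calculation for a two-branch treatment decision
    # (thrombolysis vs. conventional care). All numbers are assumptions.

    def expected_utility(branches):
        """A chance node's expected utility: sum of probability * utility."""
        return sum(p * u for p, u in branches)

    def thrombolysis_decision(p_stroke):
        """Compare 'treat' vs 'no treat' for a given hemorrhagic-stroke risk."""
        treat = expected_utility([
            (p_stroke, 0.30),       # hemorrhagic stroke after thrombolysis
            (1 - p_stroke, 0.95),   # survives MI with reperfusion benefit
        ])
        no_treat = expected_utility([
            (1.0, 0.85),            # conventional care, no added stroke risk
        ])
        return ("treat" if treat > no_treat else "no treat"), treat, no_treat

    # One-way sensitivity analysis: vary the stroke risk and watch the
    # preferred strategy flip once the risk crosses a threshold.
    for p in (0.01, 0.05, 0.10, 0.15, 0.20):
        choice, eu_t, eu_n = thrombolysis_decision(p)
        print(f"p(stroke)={p:.2f}: treat EU={eu_t:.3f}, no-treat EU={eu_n:.3f} -> {choice}")
    ```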

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg² field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ∌24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg² with ÎŽ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320–1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg² region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ∌ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey. The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world.
    Comment: 57 pages, 32 color figures, version with high-resolution figures available from https://www.lsst.org/overvie
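    As a rough consistency check (not from the abstract), the quoted coadded depth follows from the single-visit depth and the visit count under the common background-limited assumption that signal-to-noise grows as the square root of the number of visits; the r-band share of visits used below is an illustrative guess.

    ```python
    # Back-of-the-envelope check of the coadded depth quoted above, assuming
    # background-limited stacking: depth deepens by 1.25 * log10(N_visits).
    import math

    single_visit_depth_r = 24.5     # 5-sigma point-source depth per visit (AB)
    total_visits_all_bands = 800    # ~800 visits summed over six bands
    r_band_fraction = 0.23          # assumed share of visits in r (illustrative)

    n_r = total_visits_all_bands * r_band_fraction
    coadded_depth_r = single_visit_depth_r + 1.25 * math.log10(n_r)
    print(f"~{n_r:.0f} r-band visits -> coadded depth r ~ {coadded_depth_r:.1f}")
    # Yields roughly r ~ 27.3, close to the quoted r ~ 27.5.
    ```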

    3D inkjet printing of tablets exploiting bespoke complex geometries for controlled and tuneable drug release

    A hot melt 3D inkjet printing method with the potential to manufacture formulations in complex and adaptable geometries for the controlled loading and release of medicines is presented. This first use of precisely controlled, solvent-free inkjet printing to produce drug-loaded solid dosage forms is demonstrated using a naturally derived, FDA-approved material (beeswax) as the drug carrier and fenofibrate as the drug. Tablets with bespoke geometries (honeycomb architecture) were fabricated. The honeycomb architecture was modified by controlling the honeycomb cell size, and hence the surface area, to enable control of drug release profiles without the need to alter the formulation. Analysis of the formed tablets showed the drug to be evenly distributed within the beeswax at the bulk scale, with evidence of some localisation at the micron scale. An analytical model utilising a Fickian description of diffusion was developed to allow the prediction of drug release. A comparison of experimental and predicted drug release data revealed that, in addition to surface area, other factors such as the cell diameter (in the case of the honeycomb geometry) and material wettability must be considered in practical dosage form design. This information, combined with the range of achievable geometries, could allow the bespoke production of optimised personalised medicines for a variety of delivery vehicles in addition to tablets, such as medical devices.
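    The abstract does not give the form of the analytical model, so the following is only a minimal sketch of a diffusion-controlled (Fickian) release estimate that illustrates the surface-area effect: an early-time slab approximation with assumed diffusivity and geometry values, not the authors' model or measured data.

    ```python
    # Illustrative Fickian release estimate: smaller honeycomb cells expose more
    # surface area and so release drug faster. All parameter values are assumed.
    import math

    def fractional_release(t_s, diffusivity_m2_s, area_m2, volume_m3):
        """Early-time Fickian slab approximation, M_t/M_inf ~ 2*(A/V)*sqrt(D*t/pi),
        valid while the released fraction is small."""
        frac = 2.0 * (area_m2 / volume_m3) * math.sqrt(diffusivity_m2_s * t_s / math.pi)
        return min(1.0, frac)

    D = 1e-12        # assumed drug diffusivity in the wax matrix, m^2/s
    volume = 2e-7    # assumed tablet volume, m^3 (0.2 cm^3)

    # Two tablets with equal drug load but different honeycomb cell sizes.
    for label, area in (("large cells", 4e-4), ("small cells", 8e-4)):
        f = fractional_release(t_s=3600, diffusivity_m2_s=D, area_m2=area, volume_m3=volume)
        print(f"{label}: released fraction after 1 h ~ {f:.2f}")
    ```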

    A framework for the development of a global standardised marine taxon reference image database (SMarTaR-ID) to support image-based analyses

    Video and image data are regularly used in the field of benthic ecology to document biodiversity. However, their use is subject to a number of challenges, principally the identification of taxa within the images without associated physical specimens. The challenge of applying traditional taxonomic keys to the identification of fauna from images has led to the development of personal, group-, or institution-level reference image catalogues of operational taxonomic units (OTUs) or morphospecies. Lack of standardisation among these reference catalogues has led to problems with observer bias and the inability to combine datasets across studies. In addition, the lack of a common reference standard is stifling efforts to apply artificial intelligence to taxon identification. Using the North Atlantic deep sea as a case study, we propose a database structure to facilitate standardisation of morphospecies image catalogues between research groups and support future use in multiple front-end applications. We also propose a framework for coordinating international efforts to develop reference guides for the identification of marine species from images. The proposed structure maps to the Darwin Core standard to allow integration with existing databases. We suggest a management framework in which high-level taxonomic groups are curated by a regional team consisting of both end users and taxonomic experts. We identify a mechanism by which the overall quality of data within a common reference guide could be raised over the next decade. Finally, we discuss the role of a common reference standard in advancing marine ecology and supporting sustainable use of this ecosystem.
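    To illustrate what a Darwin Core mapping of a morphospecies catalogue entry might look like, here is a minimal sketch written under the kind of structure the paper proposes. The field choices and example values are assumptions, not the published SMarTaR-ID schema; the Darwin Core terms used (taxonID, scientificName, taxonRank, identificationQualifier, associatedMedia) are standard terms.

    ```python
    # Hypothetical morphospecies catalogue record and its Darwin Core mapping.
    # Field selection and example values are illustrative, not the published schema.
    from dataclasses import dataclass

    @dataclass
    class MorphospeciesRecord:
        catalogue_id: str               # internal OTU / morphospecies identifier
        scientific_name: str            # lowest confidently assigned taxon
        taxon_rank: str                 # rank of that assignment, e.g. "genus"
        identification_qualifier: str   # e.g. "sp. indet." for provisional IDs
        reference_images: list[str]     # URLs of curated reference images
        curating_team: str              # regional team responsible for this group

        def to_darwin_core(self) -> dict[str, object]:
            """Map the record onto Darwin Core terms so it can be integrated
            with existing taxon/occurrence databases."""
            return {
                "taxonID": self.catalogue_id,
                "scientificName": self.scientific_name,
                "taxonRank": self.taxon_rank,
                "identificationQualifier": self.identification_qualifier,
                "associatedMedia": " | ".join(self.reference_images),
            }

    record = MorphospeciesRecord(
        catalogue_id="NA-DS-0042",
        scientific_name="Acanella",
        taxon_rank="genus",
        identification_qualifier="sp. indet.",
        reference_images=["https://example.org/images/NA-DS-0042_01.jpg"],
        curating_team="North Atlantic cnidarians",
    )
    dwc = record.to_darwin_core()
    ```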
    • 

    corecore