10 research outputs found

    A web-based library consult service for evidence-based medicine: Technical development

    BACKGROUND: Incorporating evidence-based medicine (EBM) into clinical practice requires clinicians to learn to gain access to clinical evidence efficiently and to appraise its validity effectively. Even with current electronic systems, selecting literature-based data to solve a single patient-related problem can take more time than practicing physicians or residents can spare. Clinical librarians, as informationists, are uniquely suited to assist physicians in this endeavor. RESULTS: To improve support for evidence-based practice, we have developed a web-based EBM library consult service application (LCS). Librarians use the LCS system to provide full-text evidence-based literature with critical appraisal in response to a clinical question asked by a remote physician. LCS uses an entirely Free/Open Source Software platform and will be released under a Free Software license. In the first year of the LCS project, the software was successfully developed and a reference implementation put into active use. A two-year evaluation of the clinical, educational, and attitudinal impact on physician users and librarian staff is underway and is expected to lead to refinement and wide dissemination of the system. CONCLUSION: A web-based EBM library consult model may provide a useful way for informationists to assist clinicians, and is feasible to implement.
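    The consult workflow the abstract describes — a remote physician submits a clinical question, and a librarian returns full-text evidence with a critical appraisal — can be sketched as a minimal data model. All class, field, and function names below are illustrative assumptions, not the actual LCS schema or API.

    ```python
    # Hypothetical sketch of the LCS consult workflow: physician question in,
    # librarian-appraised citation out. Names and fields are illustrative only.
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Consult:
        question: str                     # clinical question from the physician
        physician: str
        librarian: Optional[str] = None
        citations: list = field(default_factory=list)  # appraised evidence
        status: str = "open"

    def answer(consult: Consult, librarian: str,
               citation: str, appraisal: str) -> Consult:
        """Librarian responds with a full-text citation plus critical appraisal."""
        consult.librarian = librarian
        consult.citations.append({"citation": citation, "appraisal": appraisal})
        consult.status = "answered"
        return consult

    c = Consult(question="Does drug X reduce 30-day mortality in sepsis?",
                physician="dr_remote")
    answer(c, "ebm_librarian",
           "Smith et al., RCT of drug X (hypothetical citation)",
           "Adequate randomization; limited power for the mortality endpoint.")
    print(c.status)  # prints "answered"
    ```

    The point of the sketch is the division of labor: the physician only supplies a question, while appraisal quality lives with the librarian's response, which matches the informationist model the abstract argues for.
    
    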

    Toward an Open-Access Global Database for Mapping, Control, and Surveillance of Neglected Tropical Diseases

    There is growing interest among the scientific community, health ministries, and other organizations in controlling and eventually eliminating neglected tropical diseases (NTDs). Control efforts require reliable maps of NTD distribution estimated from appropriate models and from survey data on the number of infected people among those examined at a given location. This kind of data is often available in the literature as part of epidemiological studies. However, an open-access database compiling location-specific survey data does not yet exist. We address this problem through a systematic literature review and by contacting ministries of health and research institutions to obtain disease data, including details on diagnostic techniques, demographic characteristics of the surveyed individuals, and geographical coordinates. All data were entered into a database that is freely accessible via the Internet (http://www.gntd.org). In contrast to similar efforts such as the Global Atlas of Helminth Infections (GAHI) project, the survey data are not only displayed in the form of maps; all information can be browsed using different search criteria and downloaded as Excel files for further analyses. At the beginning of 2011, the database included over 12,000 survey locations for schistosomiasis across Africa, and it is continuously updated to cover other NTDs globally.
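    The browse-and-download pattern the abstract describes — location-specific survey records filtered by search criteria, then exported in a spreadsheet-friendly form — can be sketched against a toy relational table. The schema, field names, and data below are illustrative assumptions, not the actual GNTD database.

    ```python
    # Sketch of "browse by criteria, then export": a hypothetical survey table
    # queried by disease and written out as CSV (a spreadsheet-compatible stand-in
    # for the Excel export the site offers). Schema and rows are made up.
    import csv
    import io
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE survey (
        disease TEXT, country TEXT, lat REAL, lon REAL,
        examined INTEGER, infected INTEGER, diagnostic TEXT)""")
    rows = [
        ("schistosomiasis", "Kenya", -0.42, 36.95, 250, 61, "Kato-Katz"),
        ("schistosomiasis", "Ghana",  6.70, -1.62, 180, 23, "urine filtration"),
        ("hookworm",        "Kenya", -1.28, 36.82, 300, 95, "Kato-Katz"),
    ]
    conn.executemany("INSERT INTO survey VALUES (?,?,?,?,?,?,?)", rows)

    # Browse: filter survey locations by a search criterion (disease).
    hits = conn.execute(
        "SELECT country, lat, lon, examined, infected FROM survey "
        "WHERE disease = ?", ("schistosomiasis",)).fetchall()

    # Download: serialize the hits for further analysis elsewhere.
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["country", "lat", "lon", "examined", "infected"])
    writer.writerows(hits)
    print(len(hits))  # prints 2 (matching survey locations)
    ```

    Keeping the raw examined/infected counts per location, rather than only a rendered map, is what makes downstream model-based prevalence mapping possible — the distinction the abstract draws against map-only atlases.
    
    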

    MySQL: Language Reference


    MySQL: Administrator's Guide


    Indico: A Collaboration Hub

    Since 2009, the development of Indico has focused on usability, performance, and new features, especially those related to meeting collaboration. Usability studies have resulted in the biggest change Indico has experienced so far: a new web layout that improves the user experience. Performance has also been a key goal since 2010, and the main features of Indico have been substantially optimized. Along with usability and performance, new features have been added to Indico, such as webchat integration, video-services bookings, and webcast and recording requests, designed to reinforce Indico's position as the main hub for all CERN collaboration services, together with many others whose aim is to complete conference lifecycle management. Indico development is also moving toward a broader collaboration in which other institutes, hosting their own Indico instances, can contribute to the project in order to make it a better and more complete tool.

    Spectroradiometer data structuring, pre-processing and analysis – an IT based approach

    Hyperspectral data collection results in huge datasets that need pre-processing prior to analysis. A review of the pre-processing techniques identified repetitive procedures and, consequently, a high potential for automation. Data from different hyperspectral field studies were collected and subsequently used as test sets for the described system. A relational database was used to store hyperspectral data in a structured way. Software was written to provide a graphical user interface to the database, along with pre-processing and analysis functionality. The resulting system provides excellent services in terms of organised data storage, easy data retrieval, and efficient pre-processing. It is suggested that the use of such a system can significantly improve the productivity of researchers.
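    The pattern the abstract describes — spectra kept in a relational database, with repetitive pre-processing steps automated in code rather than done by hand — can be sketched with a toy table and one such step (averaging replicate scans per wavelength). The table layout, column names, and readings are illustrative assumptions, not the actual system's schema.

    ```python
    # Sketch of structured spectral storage plus one automated pre-processing
    # step: averaging replicate scans of the same target per wavelength.
    # Schema and values are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""CREATE TABLE spectrum (
        target TEXT, replicate INTEGER, wavelength_nm INTEGER, reflectance REAL)""")
    readings = [
        ("plot_A", 1, 450, 0.12), ("plot_A", 2, 450, 0.14),
        ("plot_A", 1, 550, 0.31), ("plot_A", 2, 550, 0.29),
    ]
    conn.executemany("INSERT INTO spectrum VALUES (?,?,?,?)", readings)

    # Automated pre-processing: mean reflectance over replicates, per wavelength.
    avg = conn.execute(
        "SELECT wavelength_nm, AVG(reflectance) FROM spectrum "
        "WHERE target = ? GROUP BY wavelength_nm ORDER BY wavelength_nm",
        ("plot_A",)).fetchall()
    print(avg)
    ```

    Expressing the repetitive step as a query over a structured table is precisely what makes it automatable: the same statement applies unchanged to every target and field campaign stored in the database.
    
    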