
    The Virtual International Stroke Trials Archive

    BACKGROUND AND PURPOSE: Stroke has global importance and it causes an increasing amount of human suffering and economic burden, but its management is far from optimal. The unsuccessful outcome of several research programs highlights the need for reliable data on which to plan future clinical trials. The Virtual International Stroke Trials Archive aims to aid the planning of clinical trials by collating and providing access to a rich resource of patient data to perform exploratory analyses. METHODS: Data were contributed by the principal investigators of numerous trials from the past 16 years. These data have been centrally collated and are available for anonymized analysis and hypothesis testing. RESULTS: Currently, the Virtual International Stroke Trials Archive contains 21 trials. There are data on >15,000 patients with both ischemic and hemorrhagic stroke. Ages range between 18 and 103 years, with a mean age of 69±12 years. Outcome measures include the Barthel Index, Scandinavian Stroke Scale, National Institutes of Health Stroke Scale, Orgogozo Scale, and modified Rankin Scale. Medical history and onset-to-treatment time are readily available, and computed tomography lesion data are available for selected trials. CONCLUSIONS: This resource has the potential to influence clinical trial design and implementation through data analyses that inform planning.
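    As a concrete illustration of the kind of anonymized exploratory analysis the archive is built for, here is a minimal Python sketch. The file name and all column names (age, stroke_type, nihss_baseline, mrs_90d) are hypothetical stand-ins; the abstract does not specify the archive's schema.

        # Hypothetical exploratory pass over a VISTA-style extract.
        # File name and columns (age, stroke_type, nihss_baseline,
        # mrs_90d) are illustrative; the real schema is not given here.
        import pandas as pd

        df = pd.read_csv("vista_extract.csv")

        # Keep the age range reported for the archive (18-103 years).
        df = df[df["age"].between(18, 103)]

        # Compare 90-day modified Rankin Scale outcomes by stroke type.
        print(df.groupby("stroke_type")["mrs_90d"].agg(["count", "mean", "median"]))

        # A simple hypothesis-generating cut: proportion with a good
        # outcome (mRS 0-2) among patients with mild baseline deficits.
        mild = df[df["nihss_baseline"] <= 10]
        print("good outcome rate:", (mild["mrs_90d"] <= 2).mean())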

    How Much of the Web Is Archived?

    Although the Internet Archive's Wayback Machine is the largest and most well-known web archive, a number of public web archives have emerged in the last several years. With varying resources, audiences, and collection development policies, these archives have varying levels of overlap with each other. While individual archives can be measured in terms of number of URIs, number of copies per URI, and intersection with other archives, to date there has been no answer to the question "How much of the Web is archived?" We study the question by approximating the Web using sample URIs from DMOZ, Delicious, Bitly, and search engine indexes, and counting the number of copies of the sample URIs that exist in various public web archives. Each sample set carries its own bias. The results from our sample sets indicate that 35%-90% of the Web has at least one archived copy, 17%-49% has between 2 and 5 copies, 1%-8% has 6-10 copies, and 8%-63% has more than 10 copies in public web archives. The number of URI copies varies as a function of time, but no more than 31.3% of URIs are archived more than once per month.
    Comment: This is the long version of the short paper by the same title published at JCDL'11. 10 pages, 5 figures, 7 tables. Version 2 includes minor typographical corrections.
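    To make the counting step concrete, here is a minimal sketch against the Internet Archive's public CDX API (one real archive among the several the study covers). The sample URIs below are placeholders for the DMOZ/Delicious/Bitly/search-engine samples, and the bucket boundaries follow the abstract.

        # For each sample URI, ask the Wayback Machine's CDX API how many
        # captures it holds, then bucket the counts as in the abstract.
        # Only one archive is queried; the study covered several.
        import json
        import urllib.request
        from collections import Counter

        CDX = "https://web.archive.org/cdx/search/cdx?url={uri}&output=json&limit=20"

        def capture_count(uri: str) -> int:
            with urllib.request.urlopen(CDX.format(uri=uri)) as resp:
                body = resp.read().decode("utf-8").strip()
            rows = json.loads(body) if body else []
            # First row (if any) is the field header; the rest are captures.
            return max(len(rows) - 1, 0)

        def bucket(n: int) -> str:
            if n == 0:
                return "unarchived"
            if n == 1:
                return "1 copy"
            if n <= 5:
                return "2-5 copies"
            if n <= 10:
                return "6-10 copies"
            return ">10 copies"

        # Placeholders for the DMOZ/Delicious/Bitly/search-engine samples.
        sample_uris = ["example.com", "archive.org"]
        print(Counter(bucket(capture_count(u)) for u in sample_uris))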

    BlogForever D5.1: Design and Specification of Case Studies

    This document presents the specification and design of six case studies for testing the BlogForever platform implementation process. The report explains the data collection plan, in which users of the repository will provide usability feedback through questionnaires, as well as details of the scalability analysis based on purpose-built log-file analytics. The case studies will investigate whether the platform is sustainable, meets potential users' needs, and has an important long-term impact.
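    As a sketch of what such log-file analytics could look like, the snippet below aggregates request rates and response times from an access log. The two-field line format (ISO timestamp, response time in milliseconds) is an assumption, not the BlogForever platform's actual log layout.

        # Toy log-analytics pass of the kind the scalability analysis
        # implies: requests per minute and response-time statistics.
        # The two-field line format (ISO timestamp, milliseconds) is an
        # assumption, not the platform's real log layout.
        from collections import defaultdict
        from statistics import mean

        requests_per_minute = defaultdict(int)
        response_times_ms = []

        with open("access.log") as log:
            for line in log:
                timestamp, ms = line.split()[:2]
                requests_per_minute[timestamp[:16]] += 1  # truncate to YYYY-MM-DDTHH:MM
                response_times_ms.append(float(ms))

        print("peak req/min:", max(requests_per_minute.values()))
        print("mean response time (ms):", mean(response_times_ms))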

    mSpace meets EPrints: a Case Study in Creating Dynamic Digital Collections

    In this case study we look at issues involved in (a) generating dynamic digital libraries that are on a particular topic but span heterogeneous collections at distinct sites, (b) supplementing the artefacts in that collection with additional information available either from databases at the artefact's home or from the Web at large, and (c) providing an interaction paradigm that will support effective exploration of this new resource. We describe how we used two available frameworks, mSpace and EPrints, to support this kind of collection building. The result of the study is a set of recommendations to improve the connectivity of remote resources both to one another and to related Web resources, and to reduce problems such as co-referencing, in order to enable the creation of new collections on demand.
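    Since EPrints repositories expose their metadata over OAI-PMH, one plausible way to assemble such a cross-site, topic-specific collection is to harvest and filter Dublin Core records. A minimal sketch follows; the repository URLs and the keyword filter are placeholders, and this is not the mSpace integration the study itself describes.

        # Harvest Dublin Core records from two EPrints repositories over
        # OAI-PMH and keep those whose title matches a topic keyword.
        # Endpoint URLs and the keyword are placeholders; resumptionToken
        # paging is omitted, so only the first page is read.
        import urllib.request
        import xml.etree.ElementTree as ET

        OAI = "{base}?verb=ListRecords&metadataPrefix=oai_dc"
        NS_OAI = "{http://www.openarchives.org/OAI/2.0/}"
        NS_DC = "{http://purl.org/dc/elements/1.1/}"

        def harvest(base_url: str, keyword: str):
            with urllib.request.urlopen(OAI.format(base=base_url)) as resp:
                tree = ET.parse(resp)
            for record in tree.iter(NS_OAI + "record"):
                titles = [t.text or "" for t in record.iter(NS_DC + "title")]
                if any(keyword.lower() in t.lower() for t in titles):
                    yield titles[0]

        collection = []
        for repo in ("https://eprints.example.ac.uk/cgi/oai2",
                     "https://repository.example.org/cgi/oai2"):
            collection.extend(harvest(repo, keyword="digital libraries"))
        print(len(collection), "records matched")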

    JISC Preservation of Web Resources (PoWR) Handbook

    Handbook of web preservation produced by the JISC-PoWR project, which ran from April to November 2008. The handbook specifically addresses digital preservation issues that are relevant to the UK HE/FE web management community. The project was undertaken jointly by UKOLN at the University of Bath and the ULCC Digital Archives department.

    Astronomical Site Selection for Turkey Using GIS Techniques

    A site selection of potential observatory locations in Turkey has been carried out using Multi-Criteria Decision Analysis (MCDA) coupled with Geographical Information Systems (GIS) and satellite imagery, which in turn reduced cost and time and increased the accuracy of the final outcome. The layers of cloud cover, digital elevation model, artificial lights, precipitable water vapor, aerosol optical thickness, and wind speed were studied in the GIS system. The MCDA found the most suitable regions to be located in a strip crossing from southwest to northeast, along with a separate region in the southeast of Turkey. These regions are thus our prime candidate locations for future on-site testing. In addition to this major outcome, the study has also been applied to the locations of major observatory sites. Since no goal was set for "the best", the results of this study are limited to a list of positions, which therefore has to be further confirmed with on-site tests. National funding has been awarded to produce a prototype of an on-site test unit (measuring both astronomical and meteorological parameters) that could be used at this list of locations.
    Comment: 17 pages, 10 figures, accepted by Experimental Astronomy.
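    The core of a GIS-coupled MCDA of this kind is a weighted overlay of normalized criterion layers. The numpy sketch below illustrates that step with random stand-in rasters; the weights are purely illustrative, as the study's actual weighting scheme is not given in the abstract.

        # Weighted overlay at the heart of a GIS MCDA: normalize each
        # criterion raster to [0, 1] (inverting "less is better" layers),
        # take a weighted sum, and threshold for candidate sites.
        # Rasters are random stand-ins and weights are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        shape = (400, 600)  # stand-in raster grid over the study area

        # (raster, higher_is_better) per criterion layer from the study.
        layers = {
            "cloud_cover":      (rng.random(shape), False),
            "artificial_light": (rng.random(shape), False),
            "water_vapor":      (rng.random(shape), False),
            "aerosol_depth":    (rng.random(shape), False),
            "wind_speed":       (rng.random(shape), False),
            "elevation":        (rng.random(shape), True),
        }
        weights = {"cloud_cover": 0.3, "artificial_light": 0.2,
                   "water_vapor": 0.15, "aerosol_depth": 0.15,
                   "wind_speed": 0.1, "elevation": 0.1}

        def normalize(raster, higher_is_better):
            r = (raster - raster.min()) / (raster.max() - raster.min())
            return r if higher_is_better else 1.0 - r

        suitability = sum(weights[name] * normalize(r, hib)
                          for name, (r, hib) in layers.items())

        # Keep the top 5% of cells as prime candidates for on-site testing.
        candidates = suitability >= np.quantile(suitability, 0.95)
        print(candidates.sum(), "candidate cells")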

    MedlinePlus®: The National Library of Medicine® Brings Quality Information to Health Consumers

    The National Library of Medicine's (NLM®) MedlinePlus® is a high-quality gateway to consumer health information from NLM, the National Institutes of Health (NIH), and other authoritative organizations. For decades, NLM has been a leader in indexing, organizing, and distributing health information to health professionals. In creating MedlinePlus, NLM uses years of accumulated expertise and technical knowledge to produce an authoritative, reliable consumer health Web site. This article describes the development of MedlinePlus: its quality control processes, the integration of NLM and NIH information, NLM's relationship to other institutions, the technical and staffing infrastructures, the use of feedback for quality improvement, and future plans.

    Appraisal and the Future of Archives in the Digital Era

    Discussion of the implications of new technologies, changing public policies, and the transformation of culture for how archivists practice and think about appraisal.
