
    ViPR: an open bioinformatics database and analysis resource for virology research

    The Virus Pathogen Database and Analysis Resource (ViPR, www.ViPRbrc.org) is an integrated repository of data and analysis tools for multiple virus families, supported by the National Institute of Allergy and Infectious Diseases (NIAID) Bioinformatics Resource Centers (BRC) program. ViPR contains information for human pathogenic viruses belonging to the Arenaviridae, Bunyaviridae, Caliciviridae, Coronaviridae, Flaviviridae, Filoviridae, Hepeviridae, Herpesviridae, Paramyxoviridae, Picornaviridae, Poxviridae, Reoviridae, Rhabdoviridae and Togaviridae families, with plans to support additional virus families in the future. ViPR captures various types of information, including sequence records, gene and protein annotations, 3D protein structures, immune epitope locations, clinical and surveillance metadata and novel data derived from comparative genomics analysis. Analytical and visualization tools for metadata-driven statistical sequence analysis, multiple sequence alignment, phylogenetic tree construction, BLAST comparison and sequence variation determination are also provided. Data filtering and analysis workflows can be combined and the results saved in personal ‘Workbenches’ for future use. ViPR tools and data are available without charge as a service to the virology research community to help facilitate the development of diagnostics, prophylactics and therapeutics for priority pathogens and other viruses.

    Automatic de-identification of textual documents in the electronic health record: a review of recent research

    Background: In the United States, the Health Insurance Portability and Accountability Act (HIPAA) protects the confidentiality of patient data and requires the informed consent of the patient and approval of the Institutional Review Board to use data for research purposes, but these requirements can be waived if the data are de-identified. For clinical data to be considered de-identified, the HIPAA "Safe Harbor" technique requires 18 data elements (called PHI: Protected Health Information) to be removed. The de-identification of narrative text documents is often performed manually and requires significant resources. Well aware of these issues, several authors have investigated automated de-identification of narrative text documents from the electronic health record, and a review of recent research in this domain is presented here.
    Methods: This review focuses on recently published research (after 1995), and includes relevant publications from bibliographic queries in PubMed, conference proceedings, the ACM Digital Library, and publications referenced in already included papers.
    Results: The literature search returned more than 200 publications. The majority focused only on structured data de-identification instead of narrative text, on image de-identification, or described manual de-identification, and were therefore excluded. Finally, 18 publications describing automated text de-identification were selected for detailed analysis of the architecture and methods used, the types of PHI detected and removed, the external resources used, and the types of clinical documents targeted. All text de-identification systems aimed to identify and remove person names, and many included other types of PHI. Most systems targeted only one or two specific clinical document types, and were mostly based on two different groups of methodologies: pattern matching and machine learning. Many systems combined both approaches for different types of PHI, but the majority relied only on pattern matching, rules, and dictionaries.
    Conclusions: In general, methods based on dictionaries performed better with PHI that is rarely mentioned in clinical text, but are more difficult to generalize. Methods based on machine learning tend to perform better, especially with PHI that is not mentioned in the dictionaries used. Finally, the issues of anonymization, sufficient performance, and "over-scrubbing" are discussed in this publication.
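    The pattern-matching approach that most of the surveyed systems rely on can be sketched in a few lines. The regexes, PHI categories, and placeholder format below are purely illustrative assumptions for this example, not the rules of any published system; real de-identifiers combine many more patterns with dictionaries and machine-learned models.

```python
import re

# Minimal rule-based PHI scrubbing sketch. The categories and regexes
# below are invented for illustration; Safe Harbor covers 18 element
# types, most of which need far richer patterns and dictionaries.
PHI_PATTERNS = {
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
}

def scrub(text: str) -> str:
    """Replace each matched PHI span with its category placeholder."""
    for label, pattern in PHI_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

note = "Seen on 03/14/2009, MRN: 1234567, call 555-123-4567."
print(scrub(note))
```

    A sketch like this also makes the review's conclusion concrete: person names, the one PHI type every system targets, have no such regular surface form, which is why dictionary and machine-learning methods are needed alongside patterns.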

    The Long-Baseline Neutrino Experiment: Exploring Fundamental Symmetries of the Universe

    The preponderance of matter over antimatter in the early Universe, the dynamics of the supernova bursts that produced the heavy elements necessary for life and whether protons eventually decay --- these mysteries at the forefront of particle physics and astrophysics are key to understanding the early evolution of our Universe, its current state and its eventual fate. The Long-Baseline Neutrino Experiment (LBNE) represents an extensively developed plan for a world-class experiment dedicated to addressing these questions. LBNE is conceived around three central components: (1) a new, high-intensity neutrino source generated from a megawatt-class proton accelerator at Fermi National Accelerator Laboratory, (2) a near neutrino detector just downstream of the source, and (3) a massive liquid argon time-projection chamber deployed as a far detector deep underground at the Sanford Underground Research Facility. This facility, located at the site of the former Homestake Mine in Lead, South Dakota, is approximately 1,300 km from the neutrino source at Fermilab -- a distance (baseline) that delivers optimal sensitivity to neutrino charge-parity symmetry violation and mass ordering effects. This ambitious yet cost-effective design incorporates scalability and flexibility and can accommodate a variety of upgrades and contributions. With its exceptional combination of experimental configuration, technical capabilities, and potential for transformative discoveries, LBNE promises to be a vital facility for the field of particle physics worldwide, providing physicists from around the globe with opportunities to collaborate in a twenty- to thirty-year program of exciting science. In this document we provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess.
    Comment: Major update of previous version. This is the reference document for the LBNE science program and current status. Chapters 1, 3, and 9 provide a comprehensive overview of LBNE's scientific objectives, its place in the landscape of neutrino physics worldwide, the technologies it will incorporate and the capabilities it will possess. 288 pages, 116 figures.
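    The claim that a ~1,300 km baseline gives optimal sensitivity can be made concrete with the standard two-flavor vacuum oscillation formula, P = sin²(2θ) · sin²(1.27 Δm² L / E) with Δm² in eV², L in km and E in GeV. The sketch below uses illustrative parameter values (|Δm²₃₁| ≈ 2.5×10⁻³ eV², maximal mixing) and ignores matter effects and three-flavor interference, which the actual LBNE sensitivity studies include.

```python
import math

# Two-flavor vacuum oscillation probability -- a simplified sketch of
# why a ~1,300 km baseline is well matched to a GeV-scale neutrino
# beam. Parameter defaults are illustrative, not fitted values.
def osc_prob(L_km: float, E_GeV: float,
             dm2_eV2: float = 2.5e-3,    # |Delta m^2_31|, approximate
             sin2_2theta: float = 1.0):  # maximal mixing, assumed
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return sin2_2theta * math.sin(phase) ** 2

# First oscillation maximum occurs where the phase equals pi/2:
E_max = 1.27 * 2.5e-3 * 1300 / (math.pi / 2)
print(f"Energy at first maximum: {E_max:.2f} GeV")
print(f"P(1300 km, {E_max:.2f} GeV) = {osc_prob(1300, E_max):.3f}")
```

    With these numbers the first oscillation maximum falls near 2.6 GeV, squarely in the energy range a conventional accelerator neutrino beam can deliver, which is the intuition behind the chosen baseline.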

    Assessment of obsessive-compulsive symptom dimensions: development and evaluation

    Although several measures of obsessive-compulsive (OC) symptoms exist, most are limited in that they are not consistent with the most recent empirical findings on the nature and dimensional structure of obsessions and compulsions. In the present research, the authors developed and evaluated a measure called the Dimensional Obsessive-Compulsive Scale (DOCS) to address limitations of existing OC symptom measures. The DOCS is a 20-item measure that assesses the four dimensions of OC symptoms most reliably replicated in previous structural research. Factorial validity of the DOCS was supported by exploratory and confirmatory factor analyses of three samples, including individuals with obsessive-compulsive disorder, those with other anxiety disorders, and nonclinical individuals. Scores on the DOCS displayed good performance on indices of reliability and validity, as well as sensitivity to treatment and diagnostic sensitivity, and hold promise as a measure of OC symptoms in clinical and research settings.

    Do advertisements for antihypertensive drugs in Australia promote quality prescribing? A cross-sectional study

    Background: Antihypertensive medications are widely prescribed by doctors and heavily promoted by the pharmaceutical industry. Despite strong evidence of the effectiveness and cost-effectiveness of thiazide diuretics, trends in both promotion and prescription of antihypertensive drugs favour newer, less cost-effective agents. Observational evidence shows correlations between exposure to pharmaceutical promotion and less ideal prescribing. Our study therefore aimed to determine whether print advertisements for antihypertensive medications promote quality prescribing in hypertension.
    Methods: We performed a cross-sectional study of 113 advertisements for antihypertensive drugs from 4 general practice-oriented Australian medical publications in 2004. Advertisements were evaluated using a quality checklist based on a review of hypertension management guidelines. Main outcome measures included: frequency with which antihypertensive classes were advertised, promotion of thiazide class drugs as first-line agents, use of statistical claims in advertisements, mention of harms and prices in the advertisements, promotion of assessment and treatment of cardiovascular risk, promotion of lifestyle modification, and targeting of particular patient subgroups.
    Results: Thiazides were the most frequently advertised drug class (48.7% of advertisements), but were largely promoted in combination preparations. The only thiazide advertised as a single agent was the most expensive, indapamide. No advertisement specifically promoted any thiazide as a better first-line drug. Statistics in the advertisements tended to be expressed in relative rather than absolute terms. Drug costs were often reported, but without cost comparisons between drugs. Adverse effects were usually reported but largely confined to the advertisements' small print. Other than mentioning drug interactions with alcohol and salt, no advertisements promoted lifestyle modification. Few advertisements (2.7%) promoted the assessment of cardiovascular risk.
    Conclusion: Print advertisements for antihypertensive medications in Australia provide some, but not all, of the key messages required for guideline-concordant care. These results have implications for the regulation of drug advertising and the continuing education of doctors.
    Brett D Montgomery, Peter R Mansfield, Geoffrey K Spurling and Alison M War
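    The finding that advertisements favour relative over absolute statistics matters because relative figures look more impressive for the same underlying effect. The sketch below uses invented event rates, drawn from no particular trial, to show the arithmetic.

```python
# Hypothetical illustration of relative vs. absolute risk framing.
# The event rates are invented for this example only.
control_rate = 0.04   # 4% of untreated patients have an event
treated_rate = 0.03   # 3% of treated patients have an event

arr = control_rate - treated_rate   # absolute risk reduction: 1 point
rrr = arr / control_rate            # relative risk reduction: 25%
nnt = 1 / arr                       # number needed to treat: ~100

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt:.0f}")
```

    "Reduces events by 25%" and "treat 100 patients to prevent one event" describe the same hypothetical data; an advertisement quoting only the relative figure conveys a stronger impression than the absolute one supports.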

    Perceptual Load-Dependent Neural Correlates of Distractor Interference Inhibition

    The load theory of selective attention hypothesizes that distractor interference is suppressed after perceptual processing (i.e., in the later stage of central processing) at low perceptual load of the central task, but in the early stage of perceptual processing at high perceptual load. Consistently, studies on the neural correlates of attention have found a smaller distractor-related activation in the sensory cortex at high relative to low perceptual load. However, it is not clear whether the distractor-related activation in brain regions linked to later stages of central processing (e.g., in the frontostriatal circuits) is also smaller at high rather than low perceptual load, as might be predicted based on the load theory.
    We studied 24 healthy participants using functional magnetic resonance imaging (fMRI) during a visual target identification task with two perceptual loads (low vs. high). Participants showed distractor-related increases in activation in the midbrain, striatum, and occipital and medial and lateral prefrontal cortices at low load, but distractor-related decreases in activation in the midbrain ventral tegmental area and substantia nigra (VTA/SN), striatum, thalamus, and extensive sensory cortices at high load.
    Multiple levels of central processing involving the midbrain and frontostriatal circuits participate in suppressing distractor interference at either low or high perceptual load. For suppressing distractor interference, the processing of sensory inputs in both early and late stages of central processing is enhanced at low load but inhibited at high load.