
    Tricks to translating TB transcriptomics.

    Transcriptomics and other high-throughput methods are increasingly applied to questions relating to tuberculosis (TB) pathogenesis. Whole-blood transcriptomics has repeatedly been applied to define correlates of TB risk and has produced new insight into the late stage of disease pathogenesis. In a novel approach, the authors of a recently published study in Science Translational Medicine applied complex data analysis of existing TB transcriptomic datasets, together with in vitro models, in an attempt to identify correlates of protection in TB, which are crucially required for the development of novel TB diagnostics and therapeutics to halt this global epidemic. Utilizing latent TB infection (LTBI) as a surrogate of protection, they identified IL-32 as a mediator of interferon gamma (IFNγ)-vitamin D-dependent antimicrobial immunity and a marker of LTBI. Here, we review all TB whole-blood transcriptomic studies to date in the context of identifying correlates of protection; discuss potential pitfalls of combining complex analyses originating from such studies, the importance of detailed metadata for interpreting differential patient classification algorithms, and the effect of differing circulating cell populations between patient groups on the interpretation of resulting biomarkers; and decipher weighted gene co-expression network analysis (WGCNA), a recently developed systems biology tool that holds promise for identifying novel pathway interactions in disease pathogenesis. In conclusion, we propose the development of an integrated OMICS platform and open access to detailed metadata, so that the TB research community can leverage the vast array of OMICS data being generated with the aim of identifying the holy grail of TB research: correlates of protection.
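
    The central WGCNA step is easy to state concretely: pairwise gene-gene correlations are raised to a soft-threshold power so that weak correlations shrink toward zero while strong ones dominate, yielding a weighted network whose modules can then be related to clinical traits. The sketch below is a minimal illustration of that adjacency construction only; the expression matrix, gene count, and the soft-threshold power beta = 6 are invented for demonstration and are not taken from any of the reviewed studies.

    ```python
    # Minimal sketch of the core WGCNA step: a soft-thresholded gene
    # co-expression adjacency matrix from an expression matrix.
    # Data and beta are illustrative assumptions, not study values.
    import numpy as np

    def wgcna_adjacency(expr: np.ndarray, beta: float = 6.0) -> np.ndarray:
        """expr: samples x genes matrix; returns the unsigned adjacency |cor|^beta."""
        cor = np.corrcoef(expr, rowvar=False)  # gene x gene Pearson correlation
        return np.abs(cor) ** beta             # soft thresholding keeps strong edges

    def connectivity(adj: np.ndarray) -> np.ndarray:
        """Per-gene connectivity: summed adjacency to all other genes."""
        return adj.sum(axis=0) - 1.0            # drop the self-correlation of 1

    # Toy usage: 20 samples x 5 genes of random expression values.
    rng = np.random.default_rng(0)
    expr = rng.normal(size=(20, 5))
    print(connectivity(wgcna_adjacency(expr)))
    ```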

    Public Health and Epidemiology Informatics: Recent Research and Trends in the United States

    Objectives To survey advances in public health and epidemiology informatics over the past three years. Methods We conducted a review of English-language research works conducted in the domain of public health informatics (PHI) and published in MEDLINE between January 2012 and December 2014, where information and communication technology (ICT) was a primary subject or a main component of the study methodology. Selected articles were synthesized through a thematic analysis, using the Essential Services of Public Health as a typology. Results Based on the themes that emerged, we organized the advances into a model in which applications that support the Essential Services are, in turn, supported by a socio-technical infrastructure that relies on government policies and ethical principles. That infrastructure, in turn, depends upon education and training of the public health workforce, development that creates novel infrastructure or adapts existing infrastructure, and research that evaluates the success of the infrastructure. Finally, the persistence and growth of infrastructure depend on financial sustainability. Conclusions Public health informatics is a field that is growing in breadth, depth, and complexity. Several Essential Services have benefited from informatics, notably “Monitor Health,” “Diagnose & Investigate,” and “Evaluate.” Yet many Essential Services have not yet benefited from advances such as maturing electronic health record systems, interoperability among health information systems, analytics for population health management, use of social media among consumers, and educational certification in clinical informatics. There is much work to be done to further advance the science of PHI as well as its impact on public health practice.

    Infectious Disease Ontology

    Technological developments have resulted in tremendous increases in the volume and diversity of the data and information that must be processed in the course of biomedical and clinical research and practice. Researchers are at the same time under ever greater pressure to share data and to take steps to ensure that data resources are interoperable. The use of ontologies to annotate data has proven successful in supporting these goals and in providing new possibilities for the automated processing of data and information. In this chapter, we describe different types of vocabulary resources and emphasize those features of formal ontologies that make them most useful for computational applications. We describe current uses of ontologies and discuss future goals for ontology-based computing, focusing on its use in the field of infectious diseases. We review the largest and most widely used vocabulary resources relevant to the study of infectious diseases and conclude with a description of the Infectious Disease Ontology (IDO) suite of interoperable ontology modules that together cover the entire infectious disease domain.
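
    To make concrete why subsumption hierarchies support automated processing, the hypothetical sketch below shows the key property: data annotated with a specific ontology term can be retrieved by a query for any of its ancestor terms. The disease terms and is-a links here are invented for illustration and are not drawn from the actual IDO modules.

    ```python
    # Minimal sketch of subsumption-based retrieval over an invented
    # is-a hierarchy; not the real IDO term set.
    PARENT = {
        "tuberculosis": "bacterial infectious disease",
        "bacterial infectious disease": "infectious disease",
        "influenza": "viral infectious disease",
        "viral infectious disease": "infectious disease",
    }

    def ancestors(term: str) -> set[str]:
        """All superclasses of a term, following is-a links to the root."""
        out = set()
        while term in PARENT:
            term = PARENT[term]
            out.add(term)
        return out

    def matches(annotation: str, query: str) -> bool:
        """True if a record annotated with `annotation` answers a query for `query`."""
        return query == annotation or query in ancestors(annotation)

    # A record annotated "tuberculosis" is found by a broader query.
    print(matches("tuberculosis", "infectious disease"))        # True
    print(matches("tuberculosis", "viral infectious disease"))  # False
    ```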

    Prediction and prevention of the next pandemic zoonosis.

    Most pandemics (e.g., HIV/AIDS, severe acute respiratory syndrome, pandemic influenza) originate in animals, are caused by viruses, and are driven to emerge by ecological, behavioural, or socioeconomic changes. Despite their substantial effects on global public health and growing understanding of the process by which they emerge, no pandemic has been predicted before it infected human beings. We review what is known about the pathogens that emerge, the hosts that they originate in, and the factors that drive their emergence. We discuss challenges to their control and new efforts to predict pandemics, target surveillance to the most crucial interfaces, and identify prevention strategies. New mathematical modelling, diagnostic, communications, and informatics technologies can identify and report hitherto unknown microbes in other species, and thus new risk assessment approaches are needed to identify the microbes most likely to cause human disease. We lay out a series of research and surveillance opportunities and goals that could help to overcome these challenges and move the global pandemic strategy from response to pre-emption.

    Before the Pandemic Ends: Making Sure This Never Happens Again

    Introduction: On 30 January 2020, the World Health Organization (WHO) declared a Public Health Emergency of International Concern attendant to the emergence and spread of SARS-CoV-2, nearly two months after the first reported emergence of human cases in Wuhan, China. In the subsequent two months, global, national, and local health personnel and infrastructures have been overwhelmed, leading to suffering and death for infected people, and to the threat of socio-economic instability and potential collapse for humanity as a whole. This shows that our current and traditional mode of coping, anchored in responses after the fact, is not capable of dealing with the crisis of emerging infectious disease. Given all of our technological expertise, why is there an emerging disease crisis, and why are we losing the battle to contain and diminish emerging diseases? Part of the reason is that the prevailing paradigm explaining the biology of pathogen-host associations (coevolution, evolutionary arms races) has assumed that pathogens must evolve new capacities, that is, special mutations, in order to colonize new hosts and produce emergent disease (e.g., Parrish and Kawaoka, 2005). In this erroneous but broadly prevalent view, the evolution of new capacities creates new opportunities for pathogens. Further, given that mutations are both rare and undirected, the highly specialized nature of pathogen-host relationships should produce an evolutionary firewall limiting dissemination; by those definitions, emergences should be rare (for a historical review see Brooks et al., 2019). Pathogens, however, have become far better at finding us than our traditional understanding predicts. We face a considerable risk space for pathogens and diseases that directly threaten us, our crops, and our livestock, through expanding interfaces that bring pathogens and hosts into increasing proximity, exacerbated by environmental disruption and urban density, and fueled by globalized trade and travel. We need a new paradigm that explains what we are seeing. Additional section headers: The Stockholm Paradigm; The DAMA Protocol; A Sense of Urgency and Long-Term Commitment; Reference.

    Ontology-based knowledge representation of experiment metadata in biological data mining

    According to the PubMed resource from the U.S. National Library of Medicine, over 750,000 scientific articles were published in the ~5,000 biomedical journals worldwide in the year 2007 alone. The vast majority of these publications include results from hypothesis-driven experimentation in overlapping biomedical research domains. Unfortunately, the sheer volume of information being generated by the biomedical research enterprise has made it virtually impossible for investigators to stay aware of the latest findings in their domain of interest, let alone to assimilate and mine data from related investigations for purposes of meta-analysis. While computers have the potential to assist investigators in the extraction, management, and analysis of these data, the information contained in the traditional journal publication still consists largely of unstructured, free-text descriptions of study design, experimental application, and results interpretation, making it difficult for computers to gain access to the content of what is being conveyed without significant manual intervention. In order to circumvent these roadblocks and make the most of the output from the biomedical research enterprise, a variety of related standards in knowledge representation are being developed, proposed, and adopted in the biomedical community. In this chapter, we explore the current status of efforts to develop minimum information standards for the representation of a biomedical experiment, ontologies composed of shared vocabularies assembled into subsumption hierarchical structures, and extensible relational data models that link the information components together in a machine-readable and human-usable framework for data mining purposes.
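
    As a rough illustration of the kind of machine-readable framework the chapter discusses, the sketch below links experiment-metadata components in structured records that can be queried programmatically, in contrast to free-text descriptions. The field names and example values are assumptions made for demonstration, not the chapter's actual data model.

    ```python
    # Minimal sketch of structured, linked experiment metadata; the
    # schema and values are hypothetical, not the chapter's own model.
    from dataclasses import dataclass, field

    @dataclass
    class Assay:
        platform: str          # e.g. a microarray or sequencing platform
        measured_entity: str   # ontology term for what was measured

    @dataclass
    class Experiment:
        accession: str         # stable identifier for cross-referencing
        design_type: str       # ontology term for the study design
        organism: str          # ontology term for the subject organism
        assays: list[Assay] = field(default_factory=list)

    experiments = [
        Experiment("EXP-0001", "case-control design", "Homo sapiens",
                   [Assay("expression microarray", "mRNA abundance")]),
        Experiment("EXP-0002", "time series design", "Mus musculus",
                   [Assay("RNA-seq", "mRNA abundance")]),
    ]

    # Because the metadata is structured rather than free text, selecting
    # all human case-control experiments for meta-analysis is one query.
    human_cc = [e.accession for e in experiments
                if e.organism == "Homo sapiens"
                and e.design_type == "case-control design"]
    print(human_cc)
    ```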

    Automated Detection of Systematic Off-label Drug Use in Free Text of Electronic Medical Records.

    Off-label use of a drug occurs when it is used in a manner that deviates from its FDA label. Studies estimate that 21% of prescriptions are off-label, with only 27% of those uses supported by evidence of safety and efficacy. We have developed methods to detect population-level off-label usage using computationally efficient annotation of free text from clinical notes to generate features encoding empirical information about drug-disease mentions. By including additional features encoding prior knowledge about drugs, diseases, and known usage, we trained a highly accurate predictive model that was used to detect novel candidate off-label usages in a very large clinical corpus. We show that the candidate uses are plausible and can be prioritized for further analysis in terms of safety and efficacy.
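
    The overall shape of such a pipeline can be sketched briefly: each drug-disease pair is represented by empirical features derived from note mentions plus prior-knowledge features, and a classifier scores unlabeled pairs as candidate usages. The sketch below is a hypothetical miniature of that idea using invented feature names and toy data; it is not the authors' implementation.

    ```python
    # Hypothetical miniature of an off-label detection pipeline:
    # features per drug-disease pair, then a classifier over them.
    # Feature names and data are invented for illustration.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Each row is one drug-disease pair:
    # [co-mention count in notes, fraction of mentions after prescription,
    #  prior-knowledge drug-disease similarity, 1 if an on-label use else 0]
    X = np.array([
        [120, 0.9, 0.8, 1],   # known on-label use
        [ 80, 0.7, 0.6, 1],
        [ 60, 0.8, 0.7, 0],   # frequent co-mention, not on label
        [  2, 0.1, 0.1, 0],   # rare co-mention, likely noise
    ])
    y = np.array([1, 1, 1, 0])  # 1 = genuine usage relationship

    model = LogisticRegression().fit(X, y)

    # Score a new, unlabeled pair; a high probability with the on-label
    # flag at 0 would mark it as a candidate off-label usage.
    candidate = np.array([[45, 0.75, 0.65, 0]])
    print(model.predict_proba(candidate)[0, 1])
    ```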