196 research outputs found

    Nanoinformatics: developing new computing applications for nanomedicine

    Nanoinformatics has recently emerged to address the need for computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility for developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also encompass the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended "nanotype" to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, similarly to what happened with the Human Genome and other -omics projects, on issues like exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases, or developing new approaches for scientific ontologies, among many others.

    Librarians as Members of Integrated Institutional Information Programs: Management and Organizational Issues

    published or submitted for publication

    Medical Informatics

    The topic for this paper is an overview of medical informatics in the United States: what it is, how it is used in patient care, and what role librarians play in it. Medical informatics, a term first coined in Europe in the early 1970s, encompasses the disciplines of computer science and medicine. Medical informatics is a relatively new field, with its beginnings in the 1950s. The first scholarly papers written in the field that was to become medical informatics are found in the literature of the engineering society, the Professional Group in Bio-Medical Electronics of the Institute of Radio Engineers (IRE). This group published papers on the term "biomedical computing" in its annual conference proceedings, known as the IRE Transactions on Medical Electronics. Although the definition of medical informatics may be stated as applying the power of computers to the medical field, there are many variant definitions to be found in the literature.

    Wüsteria

    The last two decades have seen considerable efforts directed towards making Electronic Health Records interoperable through improvements in medical ontologies, terminologies and coding systems. Unfortunately, these efforts have been hampered by a number of influential ideas inherited from the work of Eugen Wüster, the father of terminology standardization and the founder of ISO TC 37. We here survey Wüster's ideas – which see terminology work as being focused on the classification of concepts in people's minds – and we argue that they still serve as the basis for a series of influential confusions. We argue further that an ontology based unambiguously, not on concepts, but on the classification of entities in reality can, by removing these confusions, make a vital contribution to ensuring the interoperability of coding systems and healthcare records in the future.

    Work in Progress: What is "Enough"?

    This poster presents dissertation work in progress on the question of "enough." The research focus is the assessment of "enough" information to make a decision, in particular a medical decision determining the diagnosis of a patient. "Enough" is considered "enough" information to facilitate making a decision or taking an action. Qualities of "enough" are identified and described by analyzing case reports published in the New England Journal of Medicine. Findings are reported, and contribute to the development of a conceptual model of factors contributing to "enough."

    The significance of SNODENT

    SNODENT is a dental diagnostic vocabulary incompletely integrated in SNOMED-CT. Nevertheless, SNODENT could become the de facto standard for dental diagnostic coding. SNODENT's manageable size, the fact that it is administratively self-contained, and its relation to a well-understood domain provide valuable opportunities to formulate and test, in controlled experiments, a series of hypotheses concerning diagnostic systems. Of particular interest are questions related to establishing appropriate quality assurance methods for its optimal level of detail in content, its ontological structure, and its construction and maintenance. This paper builds on previous software-based methodologies designed to assess the quality of SNOMED-CT. When applied to SNODENT, several deficiencies were uncovered. 9.52% of SNODENT terms point to concepts in SNOMED-CT that have some problem. 18.53% of SNODENT terms point to SNOMED-CT concepts that do not have, in SNOMED, the term used by SNODENT. Other findings include the absence of a clear specification of the exact relationship between a term and a termcode in SNODENT, and the improper assignment of the same termcode to terms with significantly different meanings. An analysis of the way in which SNODENT is structurally integrated into SNOMED resulted in the generation of 1081 new termcodes reflecting entities not present in the SNOMED tables but required by SNOMED's own description logic based classification principles. Our results show that SNODENT requires considerable enhancements in content, quality of coding, quality of ontological structure, and the manner in which it is integrated and aligned with SNOMED. We believe that methods for the analysis of the quality of diagnostic coding systems must be developed and employed if such systems are to be used effectively in both clinical practice and clinical research.

    Understanding the errors of SHAPE-directed RNA structure modeling

    Single-nucleotide-resolution chemical mapping for structured RNA is being rapidly advanced by new chemistries, faster readouts, and coupling to computational algorithms. Recent tests have shown that selective 2'-hydroxyl acylation by primer extension (SHAPE) can give near-zero error rates (0-2%) in modeling the helices of RNA secondary structure. Here, we benchmark the method using six molecules for which crystallographic data are available: tRNA(phe) and 5S rRNA from Escherichia coli, the P4-P6 domain of the Tetrahymena group I ribozyme, and ligand-bound domains from riboswitches for adenine, cyclic di-GMP, and glycine. SHAPE-directed modeling of these highly structured RNAs gave an overall false negative rate (FNR) of 17% and a false discovery rate (FDR) of 21%, with at least one helix prediction error in five of the six cases. Extensive variations of data processing, normalization, and modeling parameters did not significantly mitigate modeling errors. Only one variation, filtering out data collected with deoxyinosine triphosphate during primer extension, gave a modest improvement (FNR = 12%, and FDR = 14%). The residual structure modeling errors are explained by the insufficient information content of these RNAs' SHAPE data, as evaluated by a nonparametric bootstrapping analysis. Beyond these benchmark cases, bootstrapping suggests a low level of confidence (<50%) in the majority of helices in a previously proposed SHAPE-directed model for the HIV-1 RNA genome. Thus, SHAPE-directed RNA modeling is not always unambiguous, and helix-by-helix confidence estimates, as described herein, may be critical for interpreting results from this powerful methodology. Comment: Biochemistry, Article ASAP (Aug. 15, 2011)

    Evidence-based Health Informatics Frameworks for Applied Use.

    Health Informatics frameworks have been created surrounding the implementation, optimization, adoption, use and evaluation of health information technology, including electronic health record systems and medical devices. In this contribution, established health informatics frameworks are presented. Important considerations for each framework are its purpose, its component parts, the rigor of its development, the level of testing and validation it has undergone, and its limitations. In order to understand how to use a framework effectively, it is often necessary to seek additional explanation via literature, documentation, and discussions with the developers.

    Topical Classification of Food Safety Publications with a Knowledge Base

    The vast body of scientific publications presents an increasing challenge of finding those that are relevant to a given research question, and of making informed decisions on their basis. This becomes extremely difficult without the use of automated tools. Here, one possible area for improvement is automatic classification of publication abstracts according to their topic. This work introduces a novel, knowledge base-oriented publication classifier. The proposed method focuses on achieving scalability and easy adaptability to other domains. Classification speed and accuracy are shown to be satisfactory in the very demanding field of food safety. Further development and evaluation of the method is needed, as the proposed approach shows much potential.