
    Nanoinformatics: developing new computing applications for nanomedicine

    Nanoinformatics has recently emerged to address the need for computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility for developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also examine the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended 'nanotype' to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, similar to what happened with the Human Genome Project and other -omics projects, on issues like exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases, or developing new approaches for scientific ontologies, among many others.

    Barry Smith an sich

    Festschrift in Honor of Barry Smith on the occasion of his 65th Birthday. Published as issue 4:4 of the journal Cosmos + Taxis: Studies in Emergent Order and Organization. Includes contributions by Wolfgang Grassl, Nicola Guarino, John T. Kearns, Rudolf Lüthe, Luc Schneider, Peter Simons, Wojciech Żełaniec, and Jan Woleński.

    Autonomic computing architecture for SCADA cyber security

    Cognitive computing relates to intelligent computing platforms that are based on the disciplines of artificial intelligence, machine learning, and other innovative technologies. These technologies can be used to design systems that mimic the human brain, learn about their environment, and autonomously predict an impending anomalous situation. IBM first used the term 'Autonomic Computing' in 2001 to combat the looming complexity crisis (Ganek and Corbi, 2003). The concept was inspired by the human biological autonomic system. An autonomic system is self-healing, self-regulating, self-optimising and self-protecting (Ganek and Corbi, 2003). Therefore, the system should be able to protect itself against both malicious attacks and unintended mistakes by the operator.

    Model of professional retraining of teachers based on the development of STEM competencies

    The article describes a methodology for organizing the professional retraining and lifelong learning of teachers in the STEM field at Volodymyr Hnatiuk Ternopil National Pedagogical University (Ukraine). It analyzes foreign and domestic approaches and concepts for the implementation of STEM in educational institutions. A model of retraining teachers with a view to developing their STEM competencies, and a model of STEM competencies, were created. The developed model of STEM competencies for professional teacher training and lifelong learning includes four components (Problem solving, Working with people, Working with technology, Working with organizational systems), which are divided into three domains of STEM competencies: Skills, Knowledge, and Work activities. In order to implement and adapt the model of STEM competencies to the practice of the educational process, an experimental study was conducted. The article describes the content of the scientific research and the group of respondents, and analyzes the results of the research.

    SNOMED CT standard ontology based on the ontology for general medical science

    Background: Systematized Nomenclature of Medicine—Clinical Terms (SNOMED CT, hereafter abbreviated SCT) is a comprehensive medical terminology used for standardizing the storage, retrieval, and exchange of electronic health data. Some efforts have been made to capture the contents of SCT as Web Ontology Language (OWL), but these efforts have been hampered by the size and complexity of SCT. Method: Our proposal here is to develop an upper-level ontology and to use it as the basis for defining the terms in SCT in a way that will support quality assurance of SCT, for example, by allowing consistency checks of definitions and the identification and elimination of redundancies in the SCT vocabulary. Our proposed upper-level SCT ontology (SCTO) is based on the Ontology for General Medical Science (OGMS). Results: The SCTO is implemented in OWL 2, to support automatic inference and consistency checking. The approach will allow integration of SCT data with data annotated using Open Biomedical Ontologies (OBO) Foundry ontologies, since the use of OGMS will ensure consistency with the Basic Formal Ontology, which is the top-level ontology of the OBO Foundry. Currently, the SCTO contains 304 classes, 28 properties, 2,400 axioms, and 1,555 annotations. It is publicly available through BioPortal at http://bioportal.bioontology.org/ontologies/SCTO/. Conclusion: The resulting ontology can enhance the semantics of clinical decision support systems and semantic interoperability among distributed electronic health records. In addition, the populated ontology can be used for the automation of mobile health applications.
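    As an illustration of the kind of redundancy elimination the abstract describes, the following sketch flags asserted subclass axioms that are already implied transitively by the rest of a hierarchy. The class names and edges are invented for illustration, not actual SCTO terms.

```python
# Hypothetical mini-hierarchy of asserted subclass axioms (child, parent);
# names are illustrative, not drawn from the real SCTO.
asserted = {
    ("ViralPneumonia", "Pneumonia"),
    ("Pneumonia", "LungDisease"),
    ("ViralPneumonia", "LungDisease"),  # redundant: implied transitively
    ("LungDisease", "Disease"),
}

def transitive_closure(edges):
    # Repeatedly add implied (a, d) whenever (a, b) and (b, d) are present.
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

def redundant_axioms(edges):
    # An asserted axiom is redundant if the remaining axioms already
    # entail it via a chain of subclass steps.
    return {e for e in edges if e in transitive_closure(edges - {e})}

print(redundant_axioms(asserted))  # flags ViralPneumonia -> LungDisease
```

    A production reasoner (e.g. an OWL 2 reasoner over the published ontology) performs this check far more efficiently, but the principle is the same.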

    Results of Evolution Supervised by Genetic Algorithms

    A series of results of evolution supervised by genetic algorithms of interest to the agricultural and horticultural fields are reviewed. New original results obtained from the use of genetic algorithms on structure-activity relationships are reported.
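    As a rough illustration of the kind of supervised evolution the abstract reviews (not the authors' actual method), here is a minimal genetic algorithm in Python. The bit-string fitness is a toy stand-in for a real structure-activity score:

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def fitness(candidate, target):
    # Toy objective: count of matching bits. A real application would
    # evaluate a structure-activity relationship model here instead.
    return sum(c == t for c, t in zip(candidate, target))

def evolve(target, pop_size=40, generations=60, mutation_rate=0.05):
    n = len(target)
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, target), reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            # bit-flip mutation with small probability per bit
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, target))

target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]
best = evolve(target)
```

    The "supervision" in such studies typically enters through the fitness function, which encodes the experimenter's quality criterion for candidate solutions.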

    Profiling a decade of information systems frontiers’ research

    This article analyses the first ten years of research published in Information Systems Frontiers (ISF), from 1999 to 2008. The analysis of the published material includes examining variables such as most productive authors, citation analysis, universities associated with the most publications, geographic diversity, authors' backgrounds and research methods. The keyword analysis suggests that ISF research has evolved from establishing concepts and the domain of information systems (IS), technology and management to contemporary issues such as outsourcing, web services and security. The analysis presented in this paper has identified intellectually significant studies that have contributed to the development and accumulation of the intellectual wealth of ISF. The analysis has also identified authors published in other journals whose work largely shaped and guided the research published in ISF. This research has implications for researchers, journal editors, and research institutions.

    Beyond Volume: The Impact of Complex Healthcare Data on the Machine Learning Pipeline

    From medical charts to national census, healthcare has traditionally operated under a paper-based paradigm. However, the past decade has marked a long and arduous transformation bringing healthcare into the digital age. Ranging from electronic health records, to digitized imaging and laboratory reports, to public health datasets, healthcare now generates an incredible amount of digital information. Such a wealth of data presents an exciting opportunity for integrated machine learning solutions to address problems across multiple facets of healthcare practice and administration. Unfortunately, the ability to derive accurate and informative insights requires more than the ability to execute machine learning models. Rather, a deeper understanding of the data on which the models are run is imperative for their success. While a significant effort has been undertaken to develop models able to process the volume of data obtained during the analysis of millions of digitized patient records, it is important to remember that volume represents only one aspect of the data. In fact, drawing on data from an increasingly diverse set of sources, healthcare data presents an incredibly complex set of attributes that must be accounted for throughout the machine learning pipeline. This chapter focuses on highlighting such challenges, and is broken down into three distinct components, each representing a phase of the pipeline. We begin with attributes of the data accounted for during preprocessing, then move to considerations during model building, and end with challenges to the interpretation of model output. For each component, we present a discussion around data as it relates to the healthcare domain and offer insight into the challenges each may impose on the efficiency of machine learning techniques.
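    To make the preprocessing phase concrete, here is a minimal sketch of median imputation for missing values, one of the simplest answers to the incompleteness the chapter discusses. The records and field names are invented for illustration:

```python
# Toy records standing in for heterogeneous patient data; the fields
# and values are hypothetical, used only to demonstrate the technique.
records = [
    {"age": 67, "sbp": 142, "glucose": None},
    {"age": 54, "sbp": None, "glucose": 101},
    {"age": 71, "sbp": 130, "glucose": 118},
    {"age": None, "sbp": 155, "glucose": 96},
]

def median(values):
    s = sorted(values)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

def impute_missing(rows):
    # Fill each missing field with the median of the observed values.
    # Real clinical pipelines must also ask *why* a value is missing:
    # missingness in healthcare data is rarely random.
    fields = rows[0].keys()
    medians = {f: median([r[f] for r in rows if r[f] is not None])
               for f in fields}
    return [{f: (r[f] if r[f] is not None else medians[f]) for f in fields}
            for r in rows]

clean = impute_missing(records)
```

    Even this tiny example shows why volume is not the whole story: the right imputation strategy depends on domain knowledge about how each field was collected, not on how many records there are.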

    Applications of the ACGT Master Ontology on Cancer

    In this paper we present applications of the ACGT Master Ontology (MO), a new terminology resource for a transnational network providing data exchange in oncology, emphasizing the integration of both clinical and molecular data. The development of a new ontology was necessary due to problems with existing biomedical ontologies in oncology. The ACGT MO is a test case for the application of best practices in ontology development. This paper provides an overview of the application of the ontology within the ACGT project thus far.