473 research outputs found

    Chemical information matters: an e-Research perspective on information and data sharing in the chemical sciences

    No full text
    Recently, a number of organisations have called for open access to scientific information, and especially to the data obtained from publicly funded research; the Royal Society report and the European Commission press release are particularly notable examples. It has long been accepted that building research on the foundations laid by other scientists is both effective and efficient. Regrettably, some disciplines, chemistry being one, have been slow to recognise the value of sharing and have thus been reluctant to curate their data and information in preparation for exchanging it. The very significant increases in both the volume and the complexity of the datasets produced have encouraged the expansion of e-Research and stimulated the development of methodologies for managing, organising, and analysing "big data". We review the evolution of cheminformatics, the amalgam of chemistry, computer science, and information technology, and assess the wider e-Science and e-Research perspective. Chemical information does matter, as do the practicalities of communicating data and collaborating with data. For chemistry, unique identifiers, structure representations, and property descriptors are essential to the activities of sharing and exchange. Open science entails the sharing of more than mere facts: for example, the publication of negative outcomes can facilitate better understanding of which synthetic routes to choose, an aspiration of the Dial-a-Molecule Grand Challenge. The protagonists of open notebook science go even further and exchange their thoughts and plans. We consider the concepts of preservation, curation, provenance, discovery, and access in the context of the research lifecycle, and then focus on the role of metadata, particularly the ontologies on which the emerging chemical Semantic Web will depend. Among our conclusions, we present our choice of the "grand challenges" for the preservation and sharing of chemical information.
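
    As a concrete illustration of the identifiers, structure representations, and descriptors the review calls essential, the following minimal sketch assumes the open-source RDKit toolkit and an aspirin SMILES string; it is not taken from the paper itself.

```python
# Illustrative sketch (assumes RDKit is installed); the molecule is aspirin, given as SMILES.
from rdkit import Chem
from rdkit.Chem import Descriptors

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")

print(Chem.MolToInchi(mol))     # structure representation: standard InChI
print(Chem.MolToInchiKey(mol))  # unique identifier: hashed InChIKey
print(Descriptors.MolWt(mol))   # property descriptor: molecular weight
```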

    Connected Information Management

    Get PDF
    Society is currently inundated with more information than ever, making efficient management a necessity. Alas, most current information management suffers from several kinds of disconnectedness: applications partition data into segregated islands; small notes do not fit into traditional application categories; navigating the data is different for each kind of data; and data is available either on a certain computer or only online, but rarely both. Connected information management (CoIM) is an approach to information management that avoids these forms of disconnectedness. The core idea of CoIM is to keep all information in a central repository, with generic means for organization such as tagging. The heterogeneity of the data is taken into account by offering specialized editors. The central repository eliminates the islands of application-specific data and is formally grounded by a CoIM model. The foundation for structured data is an RDF repository. The RDF editing meta-model (REMM) enables form-based editing of this data, similar to database applications such as MS Access. Further kinds of data are supported by extending RDF, as follows. Wiki text is stored as RDF and can both contain structured text and be combined with structured data. Files are also supported by the CoIM model and are kept externally. Notes can be quickly captured and annotated with meta-data. Generic means for organization and navigation apply to all kinds of data. Ubiquitous availability of data is ensured via two CoIM implementations, the web application HYENA/Web and the desktop application HYENA/Eclipse, and all data can be synchronized between these applications. The applications were used to validate the CoIM ideas.
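
    A minimal sketch of the central-repository idea, assuming the Python rdflib library; the namespace, classes, and tag predicate below are illustrative stand-ins for the CoIM model and REMM, not HYENA's actual schema.

```python
# Illustrative sketch (assumes rdflib): heterogeneous items kept in one RDF
# repository and organized with a generic tagging predicate.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/coim/")  # hypothetical CoIM namespace
g = Graph()

note = EX["note/42"]                        # a quickly captured note
g.add((note, RDF.type, EX.Note))
g.add((note, RDFS.label, Literal("Ask about the RDF form editor")))
g.add((note, EX.tag, Literal("todo")))

doc = EX["file/report.pdf"]                 # an externally kept file
g.add((doc, RDF.type, EX.File))
g.add((doc, EX.tag, Literal("todo")))

# One generic query ("everything tagged 'todo'") spans both kinds of data.
for subject in g.subjects(EX.tag, Literal("todo")):
    print(subject)
```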

    Guide to Social Science Data Preparation and Archiving: Best Practice Throughout the Data Life Cycle

    Full text link
    http://deepblue.lib.umich.edu/bitstream/2027.42/134032/1/dataprep.pdf

    LinkHub: a Semantic Web system that facilitates cross-database queries and information retrieval in proteomics

    Get PDF
    Background: A key abstraction in representing proteomics knowledge is the notion of unique identifiers for individual entities (e.g. proteins) and the massive graph of relationships among them. These relationships are sometimes simple (e.g. synonyms) but are often more complex (e.g. one-to-many relationships in protein family membership). Results: We have built a software system called LinkHub using Semantic Web RDF that manages the graph of identifier relationships and allows exploration with a variety of interfaces. For efficiency, we also provide relational-database access and translation between the relational and RDF versions. LinkHub is practically useful for creating small, local hubs on common topics and then connecting these to major portals in a federated architecture; we have used LinkHub to establish such a relationship between UniProt and the North East Structural Genomics Consortium. LinkHub also facilitates queries and access to information and documents related to identifiers spread across multiple databases, acting as "connecting glue" between different identifier spaces. We demonstrate this with example queries discovering "interologs" of yeast protein interactions in the worm and exploring the relationship between gene essentiality and pseudogene content. We also show how "protein family based" retrieval of documents can be achieved. LinkHub is available at hub.gersteinlab.org and hub.nesg.org with supplement, database models and full source code. Conclusion: LinkHub leverages Semantic Web standards-based integrated data to provide novel information retrieval to identifier-related documents through relational graph queries, simplifies and manages connections to major hubs such as UniProt, and provides useful interactive and query interfaces for exploring the integrated data.
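
    A minimal sketch of the kind of identifier-graph traversal LinkHub supports, assuming the Python rdflib library; the mapsTo predicate and the non-UniProt URIs are illustrative assumptions, not LinkHub's actual data model.

```python
# Illustrative sketch (assumes rdflib) of traversing a graph of identifier
# relationships spread across databases.
from rdflib import Graph, Namespace, URIRef

LH = Namespace("http://example.org/linkhub/")   # hypothetical predicate namespace
g = Graph()

uniprot = URIRef("http://purl.uniprot.org/uniprot/P12345")
pdb = URIRef("http://example.org/pdb/1ABC")
nesg = URIRef("http://example.org/nesg/OR123")

g.add((uniprot, LH.mapsTo, pdb))   # identifier relationships across databases
g.add((pdb, LH.mapsTo, nesg))

# A SPARQL property path (+) follows mapping chains of any length: the
# "connecting glue" style of query between identifier spaces.
query = """
SELECT ?linked WHERE {
  <http://purl.uniprot.org/uniprot/P12345> <http://example.org/linkhub/mapsTo>+ ?linked .
}
"""
for row in g.query(query):
    print(row.linked)
```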

    Application of Neuroanatomical Ontologies for Neuroimaging Data Annotation

    Get PDF
    The annotation of functional neuroimaging results for data sharing and re-use is particularly challenging, due to the diversity of terminologies of neuroanatomical structures and cortical parcellation schemes. To address this challenge, we extended the Foundational Model of Anatomy Ontology (FMA) to include cytoarchitectural labels (Brodmann areas) and a morphological cortical labeling scheme (e.g., the part of Brodmann area 6 in the left precentral gyrus). This representation was also used to augment the neuroanatomical axis of RadLex, the ontology for clinical imaging. The resulting neuroanatomical ontology contains explicit relationships indicating which brain regions are “part of” which other regions, across cytoarchitectural and morphological labeling schemas. We annotated a large functional neuroimaging dataset with terms from the ontology and applied a reasoning engine to analyze this dataset in conjunction with the ontology, achieving successful inferences from the most specific level (e.g., how many subjects showed activation in a subpart of the middle frontal gyrus?) to more general questions (e.g., how many activations were found in areas connected via a known white matter tract?). In summary, we have produced a neuroanatomical ontology that harmonizes several different terminologies of neuroanatomical structures and cortical parcellation schemes. This ontology is publicly available as a view of the FMA on the BioPortal website. The ontological encoding of anatomic knowledge can be exploited by computer reasoning engines to make inferences about neuroanatomical relationships described in imaging datasets using different terminologies. This approach could ultimately enable knowledge discovery from large, distributed fMRI studies or medical record mining.
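
    A minimal sketch of the “part of” reasoning described above, in plain Python rather than the FMA/RadLex ontology and a description-logic reasoner; the region names and activation counts are illustrative assumptions.

```python
# Illustrative sketch: part-of relationships let a query at a general level
# aggregate results annotated with more specific labels from other schemes.
part_of = {
    "BA6 in left precentral gyrus": "left precentral gyrus",
    "left precentral gyrus": "left frontal lobe",
    "left middle frontal gyrus": "left frontal lobe",
}

activations = {   # hypothetical activation counts per annotated region
    "BA6 in left precentral gyrus": 7,
    "left middle frontal gyrus": 4,
}

def ancestors(region):
    """All regions that transitively contain the given region."""
    seen = []
    while region in part_of:
        region = part_of[region]
        seen.append(region)
    return seen

def activations_in(region):
    """Sum activations annotated to the region itself or to any of its subparts."""
    return sum(n for r, n in activations.items()
               if r == region or region in ancestors(r))

print(activations_in("left frontal lobe"))   # 11: inferred across labeling schemes
```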

    Discovering knowledge structures in mind maps of mental health risks

    Get PDF
    This thesis addressed the problem of risk analysis in mental healthcare, in the context of the GRiST project at Aston University. That project provides a risk-screening tool based on the knowledge of 46 experts, captured as mind maps that describe relationships between risks and patterns of behavioural cues. Mind mapping, though, fails to impose control over content, and is not considered to formally represent knowledge. In contrast, this thesis treated GRiST's mind maps as a rich knowledge base in need of refinement, a process that drew on existing techniques for designing databases and knowledge bases. Identifying well-defined mind map concepts, though, was hindered by spelling mistakes, and by ambiguity and lack of coverage in the tools used for researching words. A novel use of the Edit Distance overcame those problems by assessing similarities between mind map texts, and between spelling mistakes and suggested corrections. That algorithm further identified stems, the shortest text strings found in related word-forms. In contrast to existing approaches' reliance on built-in linguistic knowledge, this thesis devised a novel, more flexible text-based technique. An additional tool, Correspondence Analysis, found patterns in word usage that allowed machines to determine likely intended meanings for ambiguous words. Correspondence Analysis further produced clusters of related concepts, which in turn drove the automatic generation of novel mind maps. Such maps underpinned adjuncts to the mind mapping software used by GRiST; one such new facility generated novel mind maps to reflect the collected expert knowledge on any specified concept. Mind maps from GRiST are stored as XML, which suggested storing them in an XML database. In fact, the entire approach is "XML-centric", in that all stages rely on XML as far as possible, and an XML-based query language allows users to retrieve information from the mind map knowledge base. The approach, it was concluded, will prove valuable to mind mapping in general, and to detecting patterns in any type of digital information.
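
    A minimal sketch of the edit-distance comparison used to match mind-map terms against suggested spelling corrections; this is the standard Levenshtein dynamic programme, not GRiST's actual implementation.

```python
# Illustrative sketch: classic Levenshtein edit distance between two strings.
def edit_distance(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(edit_distance("anxeity", "anxiety"))   # 2: a transposition costs two single edits
```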

    A multi-agent system to support organ transplantation processes

    Get PDF
    This article is a review that builds a state of knowledge about multi-agent systems based on selection and search (ISSA) applied to the search for and selection of organ and tissue transplant recipients, using geolocation and emphasising the heart as a case study. In particular, the research analyses technical, scientific and normative aspects of ISSA between 2007 and 2017, in Europe (Spain) and Latin America (Colombia). The result is a baseline of artificial-intelligence-based systems for the selection and search of transplant recipients, in response to possible demand from the List of Persons Waiting for Donation (LED). On this basis, solutions can be implemented that reduce organ-allocation times by taking organ characteristics and compatibility into account: blood group, size, and location criteria, among others, matching one donor to one or several possible recipients. Finally, a technological solution model for Colombia is proposed.
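
    A minimal sketch of the selection-and-search step the review surveys: filter a waiting list by blood-group compatibility, then rank candidates by geolocated distance to the donor. The compatibility table, coordinates, and field names are illustrative assumptions; real allocation systems apply many further clinical and normative criteria.

```python
# Illustrative sketch: compatibility filter plus geolocation-based ranking.
from math import asin, cos, radians, sin, sqrt

COMPATIBLE = {            # donor blood group -> recipient groups that can accept it
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def km(p, q):
    """Great-circle (haversine) distance between two (lat, lon) points in km."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

donor = {"group": "O", "pos": (4.65, -74.08)}           # hypothetical donor location
waiting_list = [
    {"id": "R1", "group": "A",  "pos": (6.25, -75.57)},
    {"id": "R2", "group": "AB", "pos": (4.61, -74.07)},
    {"id": "R3", "group": "B",  "pos": (10.98, -74.80)},
]

candidates = [r for r in waiting_list if r["group"] in COMPATIBLE[donor["group"]]]
for r in sorted(candidates, key=lambda r: km(donor["pos"], r["pos"])):
    print(r["id"], round(km(donor["pos"], r["pos"])), "km")
```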

    Lightweight Federation of Non-Cooperating Digital Libraries

    Get PDF
    This dissertation studies the challenges and issues faced in federating heterogeneous digital libraries (DLs). The objective of this research is to demonstrate the feasibility of interoperability among non-cooperating DLs by presenting a lightweight, data-driven approach, or Data Centered Interoperability (DCI). We build a Lightweight Federated Digital Library (LFDL) system to provide a federated search service for existing digital libraries with no prior coordination. We describe the motivation, architecture, design and implementation of the LFDL, and we develop, deploy, and evaluate key services of the federation. The major difference from existing DL interoperability approaches is that we do not insist on cooperation among DLs: they do not have to change anything in their systems or processes. The underlying approach is a dynamic federation in which digital libraries can be added to (or removed from) the federation in real time. This is made possible by describing the behavior of participating DLs in an XML-based language that the federation engine understands. The major contributions of this work are: (1) This dissertation addresses the interoperability issues among non-cooperating DLs and presents a practical and efficient approach to providing a federated search service for those DLs. The DL itself remains autonomous and does not need to change its structure, data format, protocol or other internal features when it is added to the federation. (2) The implementation of the LFDL is based on a lightweight, dynamic, data-centered and rule-driven architecture. To add a DL to the federation, all that is needed is to observe the DL's interaction with the user and store the interaction specification in a human-readable and highly maintainable format. The federation engine provides the federated service based on the specification of a DL. A registration service allows dynamic DL registration, removal, or modification; no code needs to be rewritten or recompiled to add or change a DL. These notions are achieved by designing a new specification language in XML format and a processing engine that enforces and implements the rules specified in the language. (3) The thesis explores an alternative approach in which searches are distributed to participating DLs in real time, and addresses the performance and reliability problems associated with other distributed-search approaches. This is achieved by a locally maintained metadata repository extracted from the DLs, together with an efficient caching system based on that repository.
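
    A minimal sketch of the data-centered idea described above, in which a digital library's search behavior is captured declaratively in XML and interpreted at run time, so adding a library means adding a description rather than code. The element and attribute names are illustrative, not the LFDL specification language.

```python
# Illustrative sketch: build per-library search URLs from a declarative XML spec.
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

SPEC = """
<federation>
  <library name="arXiv">
    <search url="http://export.arxiv.org/api/query" queryParam="search_query"/>
  </library>
  <library name="ExampleDL">
    <search url="http://dl.example.org/search" queryParam="q"/>
  </library>
</federation>
"""

def federated_urls(spec_xml, query):
    """Yield one (name, search URL) pair per library registered in the XML spec."""
    root = ET.fromstring(spec_xml)
    for lib in root.findall("library"):
        search = lib.find("search")
        params = urlencode({search.get("queryParam"): query})
        yield lib.get("name"), search.get("url") + "?" + params

for name, url in federated_urls(SPEC, "digital library interoperability"):
    print(name, url)
```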

    Clinical foundations and information architecture for the implementation of a federated health record service

    Get PDF
    Clinical care increasingly requires healthcare professionals to access patient record information that may be distributed across multiple sites, held in a variety of paper and electronic formats, and represented as mixtures of narrative, structured, coded and multi-media entries. A longitudinal person-centred electronic health record (EHR) is a much-anticipated solution to this problem, but its realisation is proving to be a long and complex journey. This thesis explores the history and evolution of clinical information systems, and establishes a set of clinical and ethico-legal requirements for a generic EHR server. A federated health record (FHR) approach to harmonising distributed heterogeneous electronic clinical databases is advocated as the basis for meeting these requirements. The information models and middleware services needed to implement a Federated Health Record server are then described, thereby supporting access by clinical applications to a distributed set of feeder systems holding patient record information. The overall information architecture thus defined provides a generic means of combining such feeder system data to create a virtual electronic health record. Active collaboration in a wide range of clinical contexts, across the whole of Europe, has been central to the evolution of the approach taken. A federated health record server based on this architecture has been implemented by the author and colleagues and deployed in a live clinical environment in the Department of Cardiovascular Medicine at the Whittington Hospital in North London. This implementation experience has fed back into the conceptual development of the approach and has provided "proof-of-concept" verification of its completeness and practical utility. This research has benefited from collaboration with a wide range of healthcare sites, informatics organisations and industry across Europe through several EU Health Telematics projects: GEHR, Synapses, EHCR-SupA, SynEx, Medicate and 6WINIT. The information models published here have been placed in the public domain and have substantially contributed to two generations of CEN health informatics standards, including CEN TC/251 ENV 13606.
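
    A minimal sketch of the federation idea described above, in which entries held in separate feeder systems are combined into one longitudinal, person-centred view at query time; the feeder formats and field names are illustrative assumptions, not the thesis's information models or the CEN 13606 structures.

```python
# Illustrative sketch: merge heterogeneous feeder-system entries into a
# chronological virtual health record for one patient.
from datetime import date

lab_system = [      # structured feeder system (hypothetical fields)
    {"nhs_id": "123", "date": date(2003, 5, 2), "test": "HbA1c", "value": "7.1%"},
]
letters_system = [  # narrative feeder system (hypothetical fields)
    {"patient": "123", "written": date(2003, 6, 10),
     "text": "Seen in cardiology clinic; medication unchanged."},
]

def federated_record(patient_id):
    """Combine feeder entries for one patient into a single chronological view."""
    entries = [
        {"when": e["date"], "source": "lab", "content": f"{e['test']} {e['value']}"}
        for e in lab_system if e["nhs_id"] == patient_id
    ] + [
        {"when": e["written"], "source": "letter", "content": e["text"]}
        for e in letters_system if e["patient"] == patient_id
    ]
    return sorted(entries, key=lambda e: e["when"])

for entry in federated_record("123"):
    print(entry["when"], entry["source"], entry["content"])
```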