Recycling Information: Science Through Data Mining
An article considering the changes afoot in the world of science, and how exponentially increasing amounts of recorded data are affecting the way scientists now work, for example through data mining. It also discusses changes in the way that resources become obsolete, and argues that more value must be placed on the work of professionals in digital curation.
Text mining meets community curation: a newly designed curation platform to improve author experience and participation at WormBase
Biological knowledgebases rely on expert biocuration of the research literature to maintain up-to-date collections of data organized in machine-readable form. To enter information into knowledgebases, curators need to follow three steps: (i) identify papers containing relevant data, a process called triaging; (ii) recognize named entities; and (iii) extract and curate data in accordance with the underlying data models. WormBase (WB), the authoritative repository for research data on Caenorhabditis elegans and other nematodes, uses text mining (TM) to semi-automate its curation pipeline. In addition, WB engages its community, via an Author First Pass (AFP) system, to help recognize entities and classify data types in their recently published papers. In this paper, we present a new WB AFP system that combines TM and AFP into a single application to enhance community curation. The system employs string-searching algorithms and statistical methods (e.g. support vector machines (SVMs)) to extract biological entities and classify data types, and it presents the results to authors in a web form where they validate the extracted information, rather than enter it de novo as the previous form required. With this new system, we lessen the burden for authors, while at the same time receiving valuable feedback on the performance of our TM tools. The new user interface also links out to specific structured data submission forms, e.g. for phenotype or expression pattern data, giving the authors the opportunity to contribute a more detailed curation that can be incorporated into WB with minimal curator review. Our approach is generalizable and could be applied to additional knowledgebases that would like to engage their user community in assisting with curation. In the five months following the launch of the new system, the response rate has been comparable with that of the previous AFP version, but the quality and quantity of the data received have greatly improved.
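The abstract above mentions string-searching algorithms for entity extraction. As a minimal, illustrative sketch of that side of such a pipeline (not WormBase's actual code or entity lists), a lexicon of known names can be matched against an abstract so that authors only need to confirm pre-extracted entities rather than enter them de novo:

```python
# Toy lexicon-based entity extraction, in the spirit of the pipeline
# described above. The lexicon entries here are illustrative examples,
# not WormBase's actual entity lists.
import re

LEXICON = {
    "gene": ["daf-2", "daf-16", "unc-22"],
    "species": ["Caenorhabditis elegans"],
}

def extract_entities(text: str) -> dict:
    """Return, per entity type, the lexicon entries found in the text."""
    found = {}
    for etype, names in LEXICON.items():
        hits = [n for n in names
                if re.search(r"\b" + re.escape(n) + r"\b", text)]
        if hits:
            found[etype] = hits
    return found

abstract = "Mutations in daf-2 extend lifespan in Caenorhabditis elegans."
print(extract_entities(abstract))
```

A production system would pair such lookups with statistical classifiers (e.g. SVMs, as the abstract notes) to assign data types to the flagged papers.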
Chemical information matters: an e-Research perspective on information and data sharing in the chemical sciences
Recently, a number of organisations have called for open access to scientific information and especially to the data obtained from publicly funded research, among which the Royal Society report and the European Commission press release are particularly notable. It has long been accepted that building research on the foundations laid by other scientists is both effective and efficient. Regrettably, some disciplines, chemistry being one, have been slow to recognise the value of sharing and have thus been reluctant to curate their data and information in preparation for exchanging it. The very significant increases in both the volume and the complexity of the datasets produced have encouraged the expansion of e-Research, and stimulated the development of methodologies for managing, organising, and analysing "big data". We review the evolution of cheminformatics, the amalgam of chemistry, computer science, and information technology, and assess the wider e-Science and e-Research perspective. Chemical information does matter, as do matters of communicating data and collaborating with data. For chemistry, unique identifiers, structure representations, and property descriptors are essential to the activities of sharing and exchange. Open science entails the sharing of more than mere facts: for example, the publication of negative outcomes can facilitate better understanding of which synthetic routes to choose, an aspiration of the Dial-a-Molecule Grand Challenge. The protagonists of open notebook science go even further and exchange their thoughts and plans. We consider the concepts of preservation, curation, provenance, discovery, and access in the context of the research lifecycle, and then focus on the role of metadata, particularly the ontologies on which the emerging chemical Semantic Web will depend. Among our conclusions, we present our choice of the "grand challenges" for the preservation and sharing of chemical information.
PeptiCKDdb-peptide- and protein-centric database for the investigation of genesis and progression of chronic kidney disease
The peptiCKDdb is a publicly available database platform dedicated to supporting research in the field of chronic kidney disease (CKD) through identification of novel biomarkers and molecular features of this complex pathology. PeptiCKDdb collects peptidomics and proteomics datasets manually extracted from published studies related to CKD. Datasets from peptidomics or proteomics, human case/control studies on CKD and kidney or urine profiling were included. Data from 114 publications (studies of body fluids and kidney tissue: 26 peptidomics and 76 proteomics manuscripts on human CKD, and 12 focusing on healthy proteome profiling) are currently deposited, and the content is updated quarterly. Extracted datasets include information about the experimental setup, clinical study design, discovery-validation sample sizes and lists of differentially expressed proteins (P-value < 0.05). A dedicated interactive web interface, equipped with a multiparametric search engine, data export and visualization tools, enables easy browsing of the data and comprehensive analysis. In conclusion, this repository might serve as a source of data for integrative analysis or as a knowledgebase for scientists seeking confirmation of their findings and, as such, is expected to facilitate the modeling of molecular mechanisms underlying CKD and the identification of biologically relevant biomarkers. Database URL: www.peptickddb.com
Large-scale event extraction from literature with multi-level gene normalization
Text mining for the life sciences aims to aid database curation, knowledge summarization and information retrieval through the automated processing of biomedical texts. To provide comprehensive coverage and enable full integration with existing biomolecular database records, it is crucial that text mining tools scale up to millions of articles and that their analyses can be unambiguously linked to information recorded in resources such as UniProt, KEGG, BioGRID and NCBI databases. In this study, we investigate how fully automated text mining of complex biomolecular events can be augmented with a normalization strategy that identifies biological concepts in text, mapping them to identifiers at varying levels of granularity, ranging from canonicalized symbols to unique genes and proteins and broad gene families. To this end, we have combined two state-of-the-art text mining components, previously evaluated on two community-wide challenges, and have extended and improved upon these methods by exploiting their complementary nature. Using these systems, we perform normalization and event extraction to create a large-scale resource that is publicly available, unique in semantic scope, and covers all 21.9 million PubMed abstracts and 460 thousand PubMed Central open access full-text articles. This dataset contains 40 million biomolecular events involving 76 million gene/protein mentions, linked to 122 thousand distinct genes from 5032 species across the full taxonomic tree. Detailed evaluations and analyses reveal promising results for application of this data in database and pathway curation efforts. The main software components used in this study are released under an open-source license. Further, the resulting dataset is freely accessible through a novel API, providing programmatic and customized access (http://www.evexdb.org/api/v001/).
Finally, to allow for large-scale bioinformatic analyses, the entire resource is available for bulk download from http://evexdb.org/download/, under the Creative Commons Attribution-ShareAlike (CC BY-SA) license.
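The normalization strategy described above maps textual gene/protein mentions to identifiers at several levels of granularity. As a hedged, illustrative sketch of that idea (a toy lexicon lookup, not EVEX's actual method or data; the identifiers shown are for illustration only):

```python
# Illustrative multi-level normalization: a surface mention is canonicalized
# and mapped to identifiers at increasing specificity (symbol, unique gene,
# gene family). The lexicon is a toy example, not EVEX data.
LEXICON = {
    "esr1":     {"symbol": "ESR1", "entrez_gene": "2099", "family": "nuclear receptors"},
    "er-alpha": {"symbol": "ESR1", "entrez_gene": "2099", "family": "nuclear receptors"},
    "tp53":     {"symbol": "TP53", "entrez_gene": "7157", "family": "p53 family"},
}

def normalize(mention: str) -> dict:
    """Canonicalize a mention, then look it up at each level of granularity."""
    key = mention.lower().replace(" ", "-")
    return LEXICON.get(key, {"symbol": None, "entrez_gene": None, "family": None})

print(normalize("ER-alpha"))
```

Real systems disambiguate further (e.g. by species context), which a flat lookup like this cannot capture.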
Integrative biological simulation praxis: Considerations from physics, philosophy, and data/model curation practices
Integrative biological simulations have a varied and controversial history in
the biological sciences. From computational models of organelles, cells, and
simple organisms, to physiological models of tissues, organ systems, and
ecosystems, a diverse array of biological systems have been the target of
large-scale computational modeling efforts. Nonetheless, these research agendas
have yet to prove decisively their value among the broader community of
theoretical and experimental biologists. In this commentary, we examine a range
of philosophical and practical issues relevant to understanding the potential
of integrative simulations. We discuss the role of theory and modeling in
different areas of physics and suggest that certain sub-disciplines of physics
provide useful cultural analogies for imagining the future role of simulations
in biological research. We examine philosophical issues related to modeling
which consistently arise in discussions about integrative simulations and
suggest a pragmatic viewpoint that balances a belief in philosophy with the
recognition of the relative infancy of our state of philosophical
understanding. Finally, we discuss community workflow and publication practices
to allow research to be readily discoverable and amenable to incorporation into
simulations. We argue that there are aligned incentives in widespread adoption
of practices which will both advance the needs of integrative simulation
efforts as well as other contemporary trends in the biological sciences,
ranging from open science and data sharing to improving reproducibility.
Comment: 10 pages
Closing the loop: assisting archival appraisal and information retrieval in one sweep
In this article, we examine the similarities between the concept of appraisal, a process that takes place within the archives, and the concept of relevance judgement, a process fundamental to the evaluation of information retrieval systems. More specifically, we revisit selection criteria proposed as a result of archival research and work within the digital curation communities, and compare them to relevance criteria as discussed within information retrieval's literature-based discovery. We illustrate how closely these criteria relate to each other and discuss how understanding the relationships between these disciplines could form a basis for proposing automated selection for archival processes and initiating multi-objective learning with respect to information retrieval.
A text-mining system for extracting metabolic reactions from full-text articles
Background: Increasingly biological text mining research is focusing on the extraction of complex relationships
relevant to the construction and curation of biological networks and pathways. However, one important category of
pathway—metabolic pathways—has been largely neglected.
Here we present a relatively simple method for extracting metabolic reaction information from free text that scores
different permutations of assigned entities (enzymes and metabolites) within a given sentence based on the presence
and location of stemmed keywords. This method extends an approach that has proved effective in the context of the
extraction of protein–protein interactions.
Results: When evaluated on a set of manually-curated metabolic pathways using standard performance criteria, our
method performs surprisingly well. Precision and recall rates are comparable to those previously achieved for the
well-known protein-protein interaction extraction task.
Conclusions: We conclude that automated metabolic pathway construction is more tractable than has often been
assumed, and that (as in the case of protein–protein interaction extraction) relatively simple text-mining approaches can prove surprisingly effective. It is hoped that these results will provide an impetus to further research and act as a useful benchmark for judging the performance of more sophisticated methods that are yet to be developed.
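The abstract describes scoring permutations of assigned entities within a sentence by the presence and location of stemmed keywords. A toy sketch of that scoring idea follows; the keyword stems, the crude stemmer, and the weights are illustrative assumptions, not the authors' actual scoring function:

```python
# Score candidate (substrate, product) orderings for a tokenized sentence by
# the presence and position of stemmed reaction keywords. All stems, weights,
# and the suffix-stripping stemmer below are illustrative placeholders.
from itertools import permutations

REACTION_STEMS = ("convert", "catalys", "catalyz", "produc", "form")

def stem(token: str) -> str:
    # Crude suffix stripping standing in for a real stemmer (e.g. Porter).
    for suffix in ("ing", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def score_ordering(tokens, substrate, product):
    """Score one (substrate, product) assignment within a sentence."""
    idx = {t: i for i, t in enumerate(tokens)}
    if substrate not in idx or product not in idx:
        return 0.0
    s, p = idx[substrate], idx[product]
    score = 1.0 if s < p else 0.0          # prefer substrate before product
    for i, tok in enumerate(tokens):
        if any(stem(tok).startswith(k) for k in REACTION_STEMS):
            # Keywords between the two metabolites count more than others.
            score += 2.0 if min(s, p) < i < max(s, p) else 0.5
    return score

tokens = "hexokinase converts glucose into glucose-6-phosphate".split()
metabolites = ["glucose", "glucose-6-phosphate"]

best = max(permutations(metabolites, 2),
           key=lambda pair: score_ordering(tokens, *pair))
print(best)
```

The highest-scoring permutation is then taken as the extracted reaction; on this toy sentence, the substrate-before-product ordering wins.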
Digital forensics formats: seeking a digital preservation storage format for web archiving
In this paper we discuss archival storage formats from the point of view of digital curation and
preservation. Considering established approaches to data management as our jumping-off point, we
selected seven format attributes which are core to the long-term accessibility of digital materials.
These we have labeled core preservation attributes. These attributes are then used as evaluation
criteria to compare file formats belonging to five common categories: formats for archiving selected
content (e.g. tar, WARC), disk image formats that capture data for recovery or installation
(partimage, dd raw image), these two types combined with a selected compression algorithm (e.g.
tar+gzip), formats that combine packing and compression (e.g. 7-zip), and forensic file formats for
data analysis in criminal investigations (e.g. AFF, the Advanced Forensic Format). We present a
general discussion of the file format landscape in terms of the attributes we discuss, and make a
direct comparison between the three most promising archival formats: tar, WARC, and aff. We
conclude by suggesting the next steps to take the research forward and to validate the observations
we have made.
Telescope Bibliographies: an Essential Component of Archival Data Management and Operations
Assessing the impact of astronomical facilities rests upon an evaluation of
the scientific discoveries which their data have enabled. Telescope
bibliographies, which link data products with the literature, provide a way to
use bibliometrics as an impact measure for the underlying data. In this paper
we argue that the creation and maintenance of telescope bibliographies should
be considered an integral part of an observatory's operations. We review the
existing tools, services, and workflows which support these curation
activities, giving an estimate of the effort and expertise required to maintain
an archive-based telescope bibliography.
Comment: 10 pages, 3 figures, to appear in SPIE Astronomical Telescopes and
Instrumentation, SPIE Conference Series 844