Measuring DHEA-S in saliva: time of day differences and positive correlations between two different types of collection methods
Background: The anabolic steroid dehydroepiandrosterone sulfate (DHEA-S) is secreted from the adrenal cortex. It plays a significant role in the body as a precursor to sex steroids, as well as a lesser-known role in the hypothalamic-pituitary-adrenal (HPA) axis response to stress. DHEA-S can be measured reliably in saliva, making saliva collection a valuable tool for health research because it minimizes the need for invasive sampling procedures (e.g., blood draws). Typical saliva collection methods include plain cotton swab collection devices (e.g., Salivette®) and passive drool. There has been some speculation that the plain cotton collection device may interfere with determination of DHEA-S by enzyme immunoassay (EIA), bringing this saliva collection method into question. Given the increasing popularity of salivary biomarker research, we sought to determine whether the cotton swab interferes with DHEA-S determination by EIA. Findings: Fifty-six healthy young adults aged 18-30 years came to the lab in the morning (0800 hrs; 14 men, 14 women) or late afternoon (1600 hrs; 14 men, 14 women) and provided saliva samples via cotton Salivette and passive drool. Passive drool was collected first to minimize particle cross-contamination from the cotton swab. Samples were assayed for DHEA-S in duplicate using a commercially available kit (DSL, Inc., Webster, TX). DHEA-S levels collected via Salivette and passive drool were positively correlated (r = +0.83, p < 0.05), and mean DHEA-S levels did not differ significantly between the two collection methods. Salivary DHEA-S levels were significantly higher in males than in females regardless of collection method (p < 0.05), and morning values were higher than afternoon values (p < 0.05). Conclusions: Results suggest that DHEA-S can be measured accurately using either passive drool or cotton Salivette collection. Results also suggest that DHEA-S levels change across the day, and future studies need to take this time-of-day difference into account when measuring DHEA-S.
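The core validity check above is a paired comparison: the same subjects' DHEA-S values under the two collection methods, summarized by a Pearson correlation and a test of mean differences. A minimal sketch in Python, using made-up concentrations; the study's raw data are not reproduced here.

from scipy import stats

# Hypothetical paired DHEA-S concentrations (same five subjects, two methods)
salivette = [4.1, 2.8, 5.6, 3.9, 6.2]
drool = [4.4, 2.5, 5.9, 3.6, 6.5]

r, p_corr = stats.pearsonr(salivette, drool)    # study reported r = +0.83
t, p_mean = stats.ttest_rel(salivette, drool)   # paired test of mean difference
print(f"r = {r:+.2f} (p = {p_corr:.3f}); mean-difference p = {p_mean:.3f}")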
Ignoring Puff Counts: Another Shortcoming of the Federal Trade Commission Cigarette Testing Programme
OBJECTIVES: To examine reasons behind the failure of the Federal Trade Commission (FTC) to preserve puff count information from standard cigarette testing, and to elucidate the importance of puff count to overall tar yields. METHODS: We reviewed industry documents on the origins of the FTC test and data sets provided by the Tobacco Institute Testing Laboratory to the tobacco industry and the FTC for reporting purposes. RESULTS: The majority of the tobacco industry argued for "dual reporting" of tar yields, both per cigarette and per puff. Despite a request from the Tobacco Institute in 1967 that puff count information be preserved, documents and recent communications with the FTC indicate that puff number data have not been maintained by the government. In contrast, for the cigarette industry, puff count data are a fundamental and routine part of testing and important to cigarette design. A sample of puff counts for cigarettes tested in 1996 (n = 471) shows that in standard tests, on average, 18% more puffs are taken on 100 mm cigarettes than on 85 mm cigarettes (9.03 vs 7.66; p < 0.01). The 10th and 90th percentile puff counts are 6.8 and 8.8 for king size cigarettes, and 8.2 and 10.0 for 100 mm cigarettes, indicating that puff counts can vary substantially among brands. CONCLUSIONS: The FTC has failed to seek or preserve puff count information that the industry finds important. Any standard test of tar and nicotine yields should at minimum preserve puff count information.
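The 18% figure and the percentile spread are simple summaries over per-brand puff counts. A short Python sketch with hypothetical brand means shows the arithmetic; the 1996 Tobacco Institute data themselves are not reproduced here.

import numpy as np

# Hypothetical per-brand mean puff counts from a standard smoking-machine test
king_size = np.array([6.8, 7.2, 7.7, 8.1, 8.8])   # 85 mm brands
longs = np.array([8.2, 8.9, 9.0, 9.6, 10.0])      # 100 mm brands

ratio = longs.mean() / king_size.mean()            # study: 9.03 / 7.66 ≈ 1.18
p10, p90 = np.percentile(longs, [10, 90])
print(f"100 mm brands take {100 * (ratio - 1):.0f}% more puffs; "
      f"10th/90th percentiles: {p10:.1f}/{p90:.1f}")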
BioPortal: ontologies and integrated data resources at the click of a mouse
Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural-language processing and decision support. BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides access via Web services and Web browsers to ontologies developed in OWL, RDF, OBO format and Protégé frames. BioPortal functionality includes the ability to browse, search and visualize ontologies. The Web interface also facilitates community-based participation in the evaluation and evolution of ontology content by providing features to add notes to ontology terms, mappings between terms and ontology reviews based on criteria such as usability, domain coverage, quality of content, and documentation and support. BioPortal also enables integrated search of biomedical data resources such as the Gene Expression Omnibus (GEO), ClinicalTrials.gov, and ArrayExpress, through the annotation and indexing of these resources with ontologies in BioPortal. Thus, BioPortal not only provides investigators, clinicians, and developers ‘one-stop shopping’ to programmatically access biomedical ontologies, but also provides support to integrate data from a variety of biomedical resources.
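BioPortal's Web services make the repository scriptable as well as browsable. As a hedged illustration, the sketch below queries the current BioPortal REST endpoint (data.bioontology.org), which postdates the interface described in this abstract; the search path, the apikey parameter, and the JSON "collection" field follow the present-day public API, and the key itself is a placeholder.

import requests

API_KEY = "YOUR_BIOPORTAL_API_KEY"  # placeholder; issued with a free BioPortal account

resp = requests.get(
    "https://data.bioontology.org/search",
    params={"q": "melanoma", "apikey": API_KEY},
)
resp.raise_for_status()
for hit in resp.json()["collection"][:5]:
    # Each hit carries a preferred label and the term's IRI
    print(hit.get("prefLabel"), hit.get("@id"))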
Infectious Disease Ontology
Technological developments have resulted in tremendous increases in the volume and diversity of the data and information that must be processed in the course of biomedical and clinical research and practice. Researchers are at the same time under ever greater pressure to share data and to take steps to ensure that data resources are interoperable. The use of ontologies to annotate data has proven successful in supporting these goals and in providing new possibilities for the automated processing of data and information. In this chapter, we describe different types of vocabulary resources and emphasize those features of formal ontologies that make them most useful for computational applications. We describe current uses of ontologies and discuss future goals for ontology-based computing, focusing on its use in the field of infectious diseases. We review the largest and most widely used vocabulary resources relevant to the study of infectious diseases and conclude with a description of the Infectious Disease Ontology (IDO) suite of interoperable ontology modules that together cover the entire infectious disease domain.
Protégé: A Tool for Managing and Using Terminology in Radiology Applications
The development of standard terminologies such as RadLex is becoming important in radiology applications, such as structured reporting, teaching file authoring, report indexing, and text mining. The development and maintenance of these terminologies are challenging, however, because there are few specialized tools to help developers to browse, visualize, and edit large taxonomies. Protégé (http://protege.stanford.edu) is an open-source tool that allows developers to create and to manage terminologies and ontologies. It is more than a terminology-editing tool, as it also provides a platform for developers to use the terminologies in end-user applications. There are more than 70,000 registered users of Protégé who are using the system to manage terminologies and ontologies in many different domains. The RadLex project has recently adopted Protégé for managing its radiology terminology. Protégé provides several features particularly useful for managing radiology terminologies: an intuitive graphical user interface for navigating large taxonomies, visualization components for viewing complex term relationships, and a programming interface so developers can create terminology-driven radiology applications. In addition, Protégé has an extensible plug-in architecture, and its large user community has contributed a rich library of components and extensions that provide much additional useful functionality. In this report, we describe Protégé’s features and its particular advantages in the radiology domain in the creation, maintenance, and use of radiology terminology.
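Protégé's own programming interface is Java-based; as a stand-in, the Python sketch below uses the owlready2 library to show the same kind of terminology-driven access a radiology application needs, namely loading an OWL file and walking its class taxonomy. The local filename is hypothetical, not an official RadLex distribution path.

import os
from owlready2 import get_ontology

# Load a locally saved OWL export (hypothetical file) and print each class
# together with its named parents, i.e. a walk over the taxonomy
onto = get_ontology("file://" + os.path.abspath("radlex.owl")).load()
for cls in list(onto.classes())[:20]:
    parents = [p.name for p in cls.is_a if hasattr(p, "name")]
    print(cls.name, "->", parents)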
A simple spreadsheet-based, MIAME-supportive format for microarray data: MAGE-TAB
BACKGROUND: Sharing of microarray data within the research community has been greatly facilitated by the development of the disclosure and communication standards MIAME and MAGE-ML by the MGED Society. However, the complexity of the MAGE-ML format has made its use impractical for laboratories lacking dedicated bioinformatics support. RESULTS: We propose a simple tab-delimited, spreadsheet-based format, MAGE-TAB, which will become a part of the MAGE microarray data standard and can be used for annotating and communicating microarray data in a MIAME-compliant fashion. CONCLUSION: MAGE-TAB will enable laboratories without bioinformatics experience or support to manage, exchange and submit well-annotated microarray data in a standard format using a spreadsheet. The MAGE-TAB format is self-contained, and does not require an understanding of MAGE-ML or XML.
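Because MAGE-TAB is plain tab-delimited text, any spreadsheet tool or a few lines of script can produce it. A minimal Python sketch of the idea follows; the column names are in the SDRF style but are illustrative rather than the normative MAGE-TAB header set.

import csv

rows = [
    {"Source Name": "sample1", "Characteristics[organism]": "Homo sapiens", "Array Data File": "s1.cel"},
    {"Source Name": "sample2", "Characteristics[organism]": "Homo sapiens", "Array Data File": "s2.cel"},
]
# Write a tab-delimited sample-annotation table, one row per hybridization
with open("sdrf_example.txt", "w", newline="") as fh:
    writer = csv.DictWriter(fh, fieldnames=list(rows[0]), delimiter="\t")
    writer.writeheader()
    writer.writerows(rows)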
Molecular, phenotypic, and sample-associated data to describe pluripotent stem cell lines and derivatives
The use of induced pluripotent stem cells (iPSC) derived from independent patients and sources holds considerable promise to improve the understanding of development and disease. However, optimized use of iPSC depends on our ability to develop methods to efficiently qualify cell lines and protocols, monitor genetic stability, and evaluate self-renewal and differentiation potential. To accomplish these goals, 57 stem cell lines from 10 laboratories were differentiated to 7 different states, resulting in 248 analyzed samples. Cell lines were differentiated and characterized at a central laboratory using standardized cell culture methodologies, protocols, and metadata descriptors. Stem cell and derived differentiated lines were characterized using RNA-seq, miRNA-seq, copy number arrays, DNA methylation arrays, flow cytometry, and molecular histology. All materials, including raw data, metadata, analysis and processing code, and methodological and provenance documentation are publicly available for re-use and interactive exploration at https://www.synapse.org/pcbc. The goal is to provide data that can improve our ability to robustly and reproducibly use human pluripotent stem cells to understand development and disease.
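The Synapse repository hosting these data is also accessible programmatically. A hedged sketch using the synapseclient Python package follows; the Synapse ID below is a placeholder rather than the project's actual accession, and login assumes a free Synapse account.

import synapseclient

syn = synapseclient.Synapse()
syn.login()  # uses cached credentials or a personal access token

# Download one data file by its Synapse ID (placeholder) into the working directory
entity = syn.get("syn0000000", downloadLocation=".")
print(entity.name, entity.path)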
The Genopolis Microarray Database
Background: Gene expression databases are key resources for microarray data management and analysis, and the importance of proper annotation of their content is well understood. Both public repositories and microarray database systems that can be implemented by single laboratories exist. However, there is not yet a tool that can easily support a collaborative environment in which different users, with different rights of access to data, interact to define a common, highly coherent content. The scope of the Genopolis database is to provide a resource that allows different groups performing microarray experiments on a common subject to create a common coherent knowledge base and to analyse it. The Genopolis database has been implemented as a dedicated system for the scientific community studying dendritic cell and macrophage functions and host-parasite interactions. Results: The Genopolis database system allows the community to build an object-based, MIAME-compliant annotation of their experiments and to store images, raw and processed data from the Affymetrix GeneChip® platform. It supports dynamic definition of controlled vocabularies and provides automated and supervised steps to control the coherence of data and annotations. It allows precise control of the visibility of the database content to different subgroups in the community and facilitates export of its content to public repositories. It provides an interactive user interface for data analysis: users can visualize data matrices based on functional lists and sample characterization, and navigate to other data matrices defined by similarity of expression values as well as by functional characterization of the genes involved. A collaborative environment is also provided for the definition and sharing of functional annotation by users. Conclusion: The Genopolis database supports a community in building and analysing a common coherent knowledge base. This fills a gap between a local database and a public repository, where the development of a common coherent annotation is important. In its current implementation, it provides a uniformly and coherently annotated dataset on dendritic cell and macrophage differentiation.
Facilitating the development of controlled vocabularies for metabolomics technologies with text mining
BACKGROUND: Many bioinformatics applications rely on controlled vocabularies or ontologies to consistently interpret and seamlessly integrate information scattered across public resources. Experimental data sets from metabolomics studies need to be integrated with one another, but also with data produced by other types of omics studies in the spirit of systems biology, hence the pressing need for vocabularies and ontologies in metabolomics. However, constructing these resources manually is time-consuming and non-trivial. RESULTS: We describe a methodology for rapid development of controlled vocabularies, a study originally motivated by the need for vocabularies describing metabolomics technologies. We present case studies involving two controlled vocabularies (for nuclear magnetic resonance spectroscopy and gas chromatography) whose development is currently underway as part of the Metabolomics Standards Initiative. The initial vocabularies were compiled manually, providing totals of 243 and 152 terms; a further 5,699 and 2,612 new terms were acquired automatically from the literature. Analysis of the results showed that full-text articles (especially the Materials and Methods sections), rather than paper abstracts, are the major source of technology-specific terms. CONCLUSIONS: We suggest a text mining method for efficient corpus-based term acquisition as a way of rapidly expanding a set of controlled vocabularies with the terms used in the scientific literature. We adopted an integrative approach, combining relatively generic software and data resources, for time- and cost-effective development of a text mining tool that expands controlled vocabularies across various domains, as a practical alternative to both manual term collection and tailor-made named entity recognition methods.
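The acquisition step can be approximated very simply: harvest frequent word n-grams from Methods-section text and keep those absent from the seed vocabulary. The toy Python sketch below illustrates the idea on an invented snippet; a real pipeline would add part-of-speech filtering and statistical term ranking.

import re
from collections import Counter

seed_vocabulary = {"nuclear magnetic resonance", "gas chromatography"}
methods_text = (
    "Metabolites were separated by gas chromatography coupled to mass spectrometry. "
    "Spectra were acquired on a nuclear magnetic resonance spectrometer. "
    "Gas chromatography retention indices and mass spectrometry peak lists were recorded."
)

tokens = re.findall(r"[a-z]+", methods_text.lower())
bigrams = Counter(zip(tokens, tokens[1:]))
# Keep bigrams seen at least twice that are not already in the seed vocabulary
candidates = {" ".join(bg) for bg, n in bigrams.items() if n >= 2}
print(sorted(candidates - seed_vocabulary))  # -> ['mass spectrometry']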