159 research outputs found

    Testosterone Trajectories and Reference Ranges in a Large Longitudinal Sample of Male Adolescents

    Pubertal dynamics play an important role in the physical and psychological development of children and adolescents. We aim to provide reference ranges of plasma testosterone in a large longitudinal sample. Furthermore, we describe a measure of testosterone trajectories during adolescence that can be used in future investigations of development. We carried out longitudinal measurements of plasma testosterone in 2,216 samples obtained from 513 males (9 to 17 years of age) from the Avon Longitudinal Study of Parents and Children. We used integration of a model fitted to each participant's testosterone trajectory to calculate a measure of average exposure to testosterone over adolescence. We pooled these data with corresponding values reported in the literature to provide a reference range of testosterone levels in males between the ages of 6 and 19 years. The average values of total testosterone in the ALSPAC sample range from 0.82 nmol/L (Standard Deviation [SD]: 0.09) at 9 years of age to 16.5 nmol/L (SD: 2.65) at 17 years of age; these values are congruent with other reports in the literature. The average exposure to testosterone is associated with different features of testosterone trajectories, such as Peak Testosterone Change, Age at Peak Testosterone Change, and Testosterone at 17 years of age, as well as the timing of the growth spurt during puberty. The average exposure to testosterone is a useful measure for future investigations using testosterone trajectories to examine pubertal dynamics.
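The "average exposure" measure described above can be illustrated with a minimal sketch: fit a smooth model to one participant's longitudinal testosterone measurements, then integrate (here approximated by averaging the fitted curve over a dense age grid) across the observed age window. The ages, values, and cubic model choice below are illustrative assumptions, not ALSPAC data or the paper's actual model specification.

```python
import numpy as np

# Hypothetical longitudinal testosterone measurements (nmol/L) for one
# participant; ages and values are illustrative, not ALSPAC data.
ages = np.array([9.0, 11.0, 13.0, 15.0, 17.0])
testosterone = np.array([0.8, 2.1, 7.5, 13.0, 16.5])

# Fit a smooth model to the trajectory (a cubic here; the study's actual
# model specification may differ).
coeffs = np.polyfit(ages, testosterone, deg=3)
model = np.poly1d(coeffs)

# Average exposure = mean of the fitted curve over the age window,
# i.e. the integral divided by the window length, approximated on a grid.
grid = np.linspace(ages.min(), ages.max(), 200)
avg_exposure = model(grid).mean()
print(round(avg_exposure, 2))
```

Participants with earlier or steeper trajectories accumulate a larger integral over the same window, which is what lets this single number summarise pubertal timing and tempo.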

    Method Improving Reading Comprehension In Primary Education Program Students

    Abstract This study aims to determine the influence of the SQ3R learning method on the English reading comprehension skills of PGSD students. The type of this research is pre-experimental, because it is not yet a true experiment: there are external variables that influence the formation of the dependent variable, there is no control group, and the sample was not chosen randomly. The research design used is a one-group pretest-posttest design involving a single experimental group. In this design, observation is done twice, before and after the experiment. The observation made before the experiment (O1) is called the pretest and the post-experimental observation (O2) is called the posttest. The difference between O1 and O2, i.e. O2 - O1, is taken as the effect of the treatment. The results showed an improvement in the reading comprehension skills of PGSD students in Class M.4.3 using the SQ3R method, indicating that SQ3R can improve English reading comprehension skills. Keywords: reading comprehension skills, application of SQ3R method
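The one-group pretest-posttest analysis above reduces to a simple per-student difference. A minimal sketch, with purely illustrative scores (not the study's data):

```python
# Illustrative pretest (O1) and posttest (O2) comprehension scores for
# five hypothetical students; not the study's actual data.
pretest = [55, 60, 48, 70, 62]    # O1: before the SQ3R treatment
posttest = [68, 72, 63, 80, 74]   # O2: after the SQ3R treatment

# Treatment effect per the design: O2 - O1, averaged over students.
gains = [post - pre for pre, post in zip(pretest, posttest)]
mean_gain = sum(gains) / len(gains)
print(mean_gain)  # a positive mean gain suggests improved comprehension
```

Because there is no control group, the O2 - O1 difference conflates the treatment with maturation and practice effects, which is exactly why the abstract labels the design pre-experimental.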

    A simple spreadsheet-based, MIAME-supportive format for microarray data: MAGE-TAB

    BACKGROUND: Sharing of microarray data within the research community has been greatly facilitated by the development of the disclosure and communication standards MIAME and MAGE-ML by the MGED Society. However, the complexity of the MAGE-ML format has made its use impractical for laboratories lacking dedicated bioinformatics support. RESULTS: We propose a simple tab-delimited, spreadsheet-based format, MAGE-TAB, which will become a part of the MAGE microarray data standard and can be used for annotating and communicating microarray data in a MIAME-compliant fashion. CONCLUSION: MAGE-TAB will enable laboratories without bioinformatics experience or support to manage, exchange and submit well-annotated microarray data in a standard format using a spreadsheet. The MAGE-TAB format is self-contained, and does not require an understanding of MAGE-ML or XML.
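The appeal of a tab-delimited format is that standard spreadsheet and scripting tools can read it directly. A minimal sketch of parsing an SDRF-style sample annotation table with only the standard library; the column names and values below are simplified illustrations, not the full MAGE-TAB specification:

```python
import csv
import io

# A tiny SDRF-like tab-delimited sample annotation table, illustrating
# the spreadsheet style MAGE-TAB proposes (columns are simplified).
sdrf = (
    "Source Name\tCharacteristics[organism]\tArray Design REF\n"
    "sample1\tHomo sapiens\tA-AFFY-1\n"
    "sample2\tMus musculus\tA-AFFY-2\n"
)

# csv.DictReader handles the whole format: one header row, one sample
# per line, fields separated by tabs.
rows = list(csv.DictReader(io.StringIO(sdrf), delimiter="\t"))
for row in rows:
    print(row["Source Name"], row["Characteristics[organism]"])
```

The same table opens unchanged in any spreadsheet application, which is the point of the format: no XML parser or MAGE-ML object model is needed on either end.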

    Adding a Little Reality to Building Ontologies for Biology

    BACKGROUND: Many areas of biology are open to mathematical and computational modelling. The application of discrete, logical formalisms defines the field of biomedical ontologies. Ontologies have been put to many uses in bioinformatics. The most widespread is the description of entities about which data have been collected, allowing integration and analysis across multiple resources. There are now over 60 ontologies in active use, increasingly developed as large, international collaborations. There are, however, many opinions on how ontologies should be authored; that is, what is appropriate for representation. Recently, a common opinion has been the "realist" approach, which places restrictions upon the style of modelling considered to be appropriate. METHODOLOGY/PRINCIPAL FINDINGS: Here, we use a number of case studies for describing the results of biological experiments. We investigate the ways in which these could be represented using both realist and non-realist approaches; we consider the limitations and advantages of each of these models. CONCLUSIONS/SIGNIFICANCE: From our analysis, we conclude that while realist principles may enable straightforward modelling for some topics, there are crucial aspects of science and the phenomena it studies that do not fit into this approach; realism appears to be over-simplistic, which, perversely, results in overly complex ontological models. We suggest that it is impossible to avoid compromise in ontology modelling; a clearer understanding of these compromises will better enable appropriate modelling, fulfilling the many needs for discrete mathematical models within computational biology.

    The Genopolis Microarray Database

    BACKGROUND: Gene expression databases are key resources for microarray data management and analysis, and the importance of properly annotating their content is well understood. Public repositories exist, as do microarray database systems that single laboratories can implement. However, there is not yet a tool that can easily support a collaborative environment where different users, with different rights of access to data, can interact to define a common, highly coherent content. The scope of the Genopolis database is to provide a resource that allows different groups performing microarray experiments related to a common subject to create a common coherent knowledge base and to analyse it. The Genopolis database has been implemented as a dedicated system for the scientific community studying dendritic cell and macrophage functions and host-parasite interactions. RESULTS: The Genopolis Database system allows the community to build an object-based, MIAME-compliant annotation of their experiments and to store images, raw and processed data from the Affymetrix GeneChip® platform. It supports dynamic definition of controlled vocabularies and provides automated and supervised steps to control the coherence of data and annotations. It allows precise control of the visibility of the database content to different subgroups in the community and facilitates export of its content to public repositories. It provides an interactive user interface for data analysis: this allows users to visualise data matrices based on functional lists and sample characterisation, and to navigate to other data matrices defined by similarity of expression values as well as by functional characterisations of the genes involved. A collaborative environment is also provided for the definition and sharing of functional annotation by users. CONCLUSION: The Genopolis Database supports a community in building a common coherent knowledge base and analysing it. This fills a gap between a local database and a public repository, where the development of a common coherent annotation is important. In its current implementation, it provides a uniformly and coherently annotated dataset on dendritic cell and macrophage differentiation.

    Facilitating the development of controlled vocabularies for metabolomics technologies with text mining

    BACKGROUND: Many bioinformatics applications rely on controlled vocabularies or ontologies to consistently interpret and seamlessly integrate information scattered across public resources. Experimental data sets from metabolomics studies need to be integrated with one another, but also with data produced by other types of omics studies in the spirit of systems biology, hence the pressing need for vocabularies and ontologies in metabolomics. However, it is time-consuming and non-trivial to construct these resources manually. RESULTS: We describe a methodology for rapid development of controlled vocabularies, a study originally motivated by the need for vocabularies describing metabolomics technologies. We present case studies involving two controlled vocabularies (for nuclear magnetic resonance spectroscopy and gas chromatography) whose development is currently underway as part of the Metabolomics Standards Initiative. The initial vocabularies were compiled manually, providing a total of 243 and 152 terms. A total of 5,699 and 2,612 new terms were acquired automatically from the literature. The analysis of the results showed that full-text articles (especially the Materials and Methods sections) are the major source of technology-specific terms, as opposed to paper abstracts. CONCLUSIONS: We suggest a text mining method for efficient corpus-based term acquisition as a way of rapidly expanding a set of controlled vocabularies with the terms used in the scientific literature. We adopted an integrative approach, combining relatively generic software and data resources for time- and cost-effective development of a text mining tool for expansion of controlled vocabularies across various domains, as a practical alternative to both manual term collection and tailor-made named entity recognition methods.
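Corpus-based term acquisition of the kind described above can be illustrated with a deliberately simple frequency-based sketch: extract repeated multi-word sequences from a Materials and Methods passage as candidate vocabulary terms. The text and the bigram-frequency heuristic are illustrative assumptions; the study's actual pipeline combines more sophisticated tools.

```python
import re
from collections import Counter

# Toy "Materials and Methods" passage; a real corpus would be full-text
# articles, which the study found to be the richest source of terms.
methods_text = (
    "Samples were analysed by gas chromatography coupled to mass "
    "spectrometry. Gas chromatography retention times were recorded. "
    "Nuclear magnetic resonance spectra were acquired."
)

# Candidate technology terms: two-word sequences (bigrams) that recur,
# a crude stand-in for statistical termhood measures.
tokens = re.findall(r"[a-z]+", methods_text.lower())
bigrams = Counter(zip(tokens, tokens[1:]))
candidates = [" ".join(bg) for bg, n in bigrams.items() if n > 1]
print(candidates)
```

Candidates extracted this way would still be curated by a domain expert before entering the vocabulary, matching the paper's supervised, semi-automatic workflow.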