2,804,023 research outputs found

    Science Models as Value-Added Services for Scholarly Information Systems

    Full text link
    The paper introduces scholarly Information Retrieval (IR) as a further dimension that should be considered in the science modeling debate. The IR use case is seen as a validation model of the adequacy of science models in representing and predicting structure and dynamics in science. Particular conceptualizations of scholarly activity and structures in science are used as value-added search services to improve retrieval quality: a co-word model depicting the cognitive structure of a field (used for query expansion), the Bradford law of information concentration, and a model of co-authorship networks (both used for re-ranking search results). An evaluation showed that the proposed science-model-driven services do improve retrieval quality. From an IR perspective, the models studied are therefore verified as expressive conceptualizations of central phenomena in science. Thus, the IR perspective can contribute significantly to a better understanding of scholarly structures and activities. Comment: 26 pages, to appear in Scientometrics.
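
    As a rough illustration of the re-ranking idea described above (not the paper's actual method), the sketch below interpolates each result's retrieval score with a simple co-authorship centrality score; the field names and the 0.7/0.3 weighting are assumptions.

    # Hypothetical sketch: re-rank results by mixing the original retrieval score
    # with a normalized-degree centrality score from a co-authorship network.
    def rerank(results, coauthors, alpha=0.7):
        """results: list of {'score': float, 'authors': [str]};
        coauthors: {author: set of co-authors} adjacency map."""
        n = max(len(coauthors), 1)
        centrality = {a: len(nbrs) / n for a, nbrs in coauthors.items()}
        def combined(r):
            c = max((centrality.get(a, 0.0) for a in r["authors"]), default=0.0)
            return alpha * r["score"] + (1 - alpha) * c
        return sorted(results, key=combined, reverse=True)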

    Modelling the emergent dynamics and major metabolites of the human colonic microbiota

    Get PDF
    Funded by Scottish Government's Rural and Environment Science and Analytical Services Division (RESAS). Acknowledgements: We would like to thank Thanasis Vogogias, David Nutter and Alec Mann for their assistance in developing the software for this model. We also acknowledge the Scottish Government's Rural and Environment Science and Analytical Services Division (RESAS) for their financial support. Furthermore, many thanks go to the two anonymous reviewers whose hard work has greatly improved this paper. Peer reviewed. Publisher PDF.

    Biology postcard

    Get PDF
    Postcard promoting BU Science & Engineering Library staff and services for Biology

    E-Science in the classroom - Towards viability

    Get PDF
    E-Science has the potential to transform school science by enabling learners, teachers and research scientists to engage together in authentic scientific enquiry, collaboration and learning. However, if we are to reap the benefits of this potential as part of everyday teaching and learning, we need to explicitly think about and support the work required to set up and run e-Science experiences within any particular educational context. In this paper, we present a framework for identifying and describing the resources, tools and services necessary to move e-Science into the classroom, together with examples of these. This framework is derived from previous experiences conducting educational e-Science projects and systematic analysis of the categories of ‘hidden work’ needed to run these projects (Smith, Underwood, Fitzpatrick, & Luckin, forthcoming). The articulation of resources, tools and services based on these categories provides a starting point for more methodical design and deployment of future educational e-Science projects, reflection on which can also help further develop the framework. It also points to the technological infrastructure from which such tools and services could be built. As such, it provides an agenda of work to develop both processes and technologies that would make it practical for teachers to deliver active and collaborative e-Science learning experiences on a larger scale within and across schools. Routine school e-Science will only be possible if such support is specified, implemented and made available to teachers within their work contexts in an appropriate and usable form.

    Astrophysics science operations in the Great Observatories era

    Get PDF
    Plans for Astrophysics science operations during the decade of the nineties are described from the point of view of a scientist who wishes to make a space-borne astronomical observation or to use archival astronomical data. 'Science Operations' include the following: proposal preparation, observation planning and execution, data collection, data processing and analysis, and dissemination of results. For each of these areas of science operations, we derive technology requirements for the next ten to twenty years. The scientist will be able to use a variety of services and infrastructure, including the 'Astrophysics Data System.' The current status and plans for these science operations services are described
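
    Purely as an illustration of the operations flow listed above (the names are assumptions, not an actual system interface), the stages can be modeled as an ordered pipeline that an observing program moves through:

    # Illustrative sketch: science-operations stages as an ordered pipeline.
    SCIENCE_OPS_STAGES = (
        "proposal preparation",
        "observation planning and execution",
        "data collection",
        "data processing and analysis",
        "dissemination of results",
    )

    def next_stage(current):
        """Return the stage following `current`, or None after dissemination."""
        i = SCIENCE_OPS_STAGES.index(current)
        return SCIENCE_OPS_STAGES[i + 1] if i + 1 < len(SCIENCE_OPS_STAGES) else None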

    Undernutrition and stage of gestation influence fetal adipose tissue gene expression

    Get PDF
    Funded by the Scottish Government's Rural and Environment Science and Analytical Services Division (RESAS), including the Strategic Partnership for Animal Science Excellence (SPASE), and the U.S. National Institutes of Health (HD045784). None of the authors had any financial or personal conflicts of interest. Peer reviewed. Postprint.

    Units of Evidence for Analyzing Subdisciplinary Difference in Data Practice Studies

    Get PDF
    Digital libraries (DLs) are adapting to accommodate research data and related services. The complexities of this new content span the elements of DL development, and there are questions concerning data selection, service development, and how best to align these with local, institutional initiatives for cyberinfrastructure, data-intensive research, and data stewardship. Small science disciplines are of particular relevance due to the prevalence of this mode of research in the academy and the anticipated magnitude of data production. To support data acquisition into DLs – and subsequent data reuse – there is a need for new knowledge on the range and complexities inherent in practice-data-curation arrangements for small science research. We present a flexible methodological approach crafted to generate data units to analyze these relationships and facilitate cross-disciplinary comparisons. Library Services (LG-06-07-0032-07) and National Science Foundation (OCI-0830976). Peer reviewed.

    World Guide to Technical Information and Documentation Services. United Nations Educational, Scientific and Cultural Organization. Paris: UNESCO, 1969. 287 pp. Hardbound, $6.00; paper, $4.00 (available in U.S.A. from Unipub. Inc., P.O. Box 433, New York City, N.Y. 10016).

    Get PDF
    Excerpt: This very useful reference volume is a companion to UNESCO's World Guide to Science Information and Documentation Services (1965). It lists and describes the principal centers in each country which provide technical information, either to all investigators or to a restricted clientele. 273 institutions in 73 countries have been included, with an informative yet concise report upon each source. A sample entry lists the name of the repository in the vernacular, English, French and acronym; addresses; brief history; staff; subject coverage; nature of library; nature of abstracting service; whether bibliographies, literature searches or translations are available; information about photoreproduction services; and methods of payment for services. The remarkable proliferation of information sources in science and technology makes such guides not only convenient but necessary.

    Intelligent services for big data science

    Get PDF
    Cities are areas where Big Data is having a real impact. Town planners and administration bodies just need the right tools at their fingertips to consume all the data points that a town or city generates and then be able to turn that into actions that improve people's lives. In this case, Big Data is definitely a phenomenon that has a direct impact on the quality of life for those of us who choose to live in a town or city. Smart Cities of tomorrow will rely not only on sensors within the city infrastructure, but also on a large number of devices that will willingly sense and integrate their data into technological platforms used for introspection into the habits and situations of individuals and city-wide communities. Predictions say that cities will generate over 4.1 terabytes per day per square kilometer of urbanized land area by 2016. Efficiently handling such amounts of data is already a challenge. In this paper we present our solutions designed to support next-generation Big Data applications. We first present CAPIM, a platform designed to automate the process of collecting and aggregating context information on a large scale. It integrates services designed to collect context data (location, the user's profile and characteristics, as well as the environment). Later on, we present a concrete implementation of an Intelligent Transportation System designed on top of CAPIM. The application is designed to assist users and city officials in better understanding traffic problems in large cities. Finally, we present a solution for efficient storage of context data on a large scale. The combination of these services provides support for intelligent Smart City applications and for active and autonomous adaptation and smart provision of services and content, using the advantages of contextual information. Peer reviewed. Postprint (author's final draft).
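
    As a minimal sketch of the kind of large-scale context aggregation CAPIM is described as automating (ContextRecord and aggregate_by_cell are illustrative names, not CAPIM's actual API), devices could report context records that are then bucketed by spatial cell and time window for later analysis:

    # Hypothetical sketch: bucket device context reports by location cell and time window.
    from dataclasses import dataclass
    from collections import defaultdict

    @dataclass
    class ContextRecord:
        device_id: str
        lat: float
        lon: float
        timestamp: int   # seconds since epoch
        readings: dict   # e.g. {"speed_kmh": 34.0, "noise_db": 61.2}

    def aggregate_by_cell(records, cell_deg=0.01, window_s=300):
        """Group records into (lat cell, lon cell, time window) buckets."""
        buckets = defaultdict(list)
        for r in records:
            key = (round(r.lat / cell_deg), round(r.lon / cell_deg),
                   r.timestamp // window_s)
            buckets[key].append(r)
        return buckets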

    Provenance-based validation of E-science experiments

    No full text
    E-Science experiments typically involve many distributed services maintained by different organisations. After an experiment has been executed, it is useful for a scientist to verify that the execution was performed correctly or is compatible with some existing experimental criteria or standards. Scientists may also want to review and verify experiments performed by their colleagues. There are no existing frameworks for validating such experiments in today's e-Science systems. Users therefore have to rely on error checking performed by the services, or adopt other ad hoc methods. This paper introduces a platform-independent framework for validating workflow executions. The validation relies on reasoning over the documented provenance of experiment results and semantic descriptions of services advertised in a registry. This validation process ensures experiments are performed correctly, and thus results generated are meaningful. The framework is tested in a bioinformatics application that performs protein compressibility analysis
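
    A minimal sketch of the validation idea, assuming a simplified trace and registry format (the names and structures below are illustrative, not the framework's actual schema): each recorded service invocation is checked against the semantic description advertised for that service.

    # Hypothetical sketch: validate a provenance trace against a service registry.
    def validate_provenance(trace, registry):
        """trace: list of {'service', 'inputs', 'outputs'} dicts;
        registry: {service_name: {'inputs': set of types, 'outputs': set of types}}."""
        errors = []
        for step in trace:
            desc = registry.get(step["service"])
            if desc is None:
                errors.append(f"{step['service']}: not advertised in registry")
                continue
            if not set(step["inputs"]) <= desc["inputs"]:
                errors.append(f"{step['service']}: unexpected input types")
            if not set(step["outputs"]) <= desc["outputs"]:
                errors.append(f"{step['service']}: unexpected output types")
        return errors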