    Chemical information matters: an e-Research perspective on information and data sharing in the chemical sciences

    Recently, a number of organisations have called for open access to scientific information and especially to the data obtained from publicly funded research, among which the Royal Society report and the European Commission press release are particularly notable. It has long been accepted that building research on the foundations laid by other scientists is both effective and efficient. Regrettably, some disciplines, chemistry being one, have been slow to recognise the value of sharing and have thus been reluctant to curate their data and information in preparation for exchanging it. The very significant increases in both the volume and the complexity of the datasets produced have encouraged the expansion of e-Research and stimulated the development of methodologies for managing, organising, and analysing "big data". We review the evolution of cheminformatics, the amalgam of chemistry, computer science, and information technology, and assess the wider e-Science and e-Research perspective. Chemical information does matter, as do matters of communicating data and collaborating with data. For chemistry, unique identifiers, structure representations, and property descriptors are essential to the activities of sharing and exchange. Open science entails the sharing of more than mere facts: for example, the publication of negative outcomes can facilitate better understanding of which synthetic routes to choose, an aspiration of the Dial-a-Molecule Grand Challenge. The protagonists of open notebook science go even further and exchange their thoughts and plans. We consider the concepts of preservation, curation, provenance, discovery, and access in the context of the research lifecycle, and then focus on the role of metadata, particularly the ontologies on which the emerging chemical Semantic Web will depend. Among our conclusions, we present our choice of the "grand challenges" for the preservation and sharing of chemical information.
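
    The abstract's point about unique identifiers and structure representations can be made concrete with a small sketch. The example below is illustrative only: it assumes an RDKit installation with InChI support, and the compound (aspirin) and its SMILES string are standard reference values rather than data from the paper.

        from rdkit import Chem  # assumption: RDKit is installed with InChI support

        # One compound expressed in three interchangeable representations.
        smiles = "CC(=O)Oc1ccccc1C(=O)O"      # structure as a SMILES string (aspirin)
        mol = Chem.MolFromSmiles(smiles)      # in-memory connection table
        inchi = Chem.MolToInchi(mol)          # canonical IUPAC InChI string
        inchikey = Chem.MolToInchiKey(mol)    # fixed-length hashed identifier

        print(inchi)      # the full layered identifier
        print(inchikey)   # a 27-character key suitable for database lookup and exchange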

    Science in the New Zealand Curriculum e-in-science

    This milestone report explores some innovative possibilities for e-in-science practice to enhance teacher capability and increase student engagement and achievement. In particular, this report gives insights into how e-learning might be harnessed to help create a future-oriented science education programme. “Innovative” practices are considered to be those that integrate (or could integrate) digital technologies in science education in ways that are not yet commonplace. “Future-oriented education” refers to the type of education that students in the “knowledge age” are going to need. While it is not yet clear exactly what this type of education might look like, it is clear that it will be different from the current system. One framework used to differentiate between these kinds of education is the evolution of education from Education 1.0 to Education 2.0 and 3.0 (Keats & Schmidt, 2007). Education 1.0, like Web 1.0, is considered to be largely a one-way process: students “get” knowledge from their teachers or other information sources. Education 2.0, as defined by Keats and Schmidt, happens when Web 2.0 technologies are used to enhance traditional approaches to education. New interactive media, such as blogs and social bookmarking, are used, but the process of education itself does not differ significantly from Education 1.0. Education 3.0, by contrast, is characterised by rich, cross-institutional, cross-cultural educational opportunities. The learners themselves play a key role as creators of knowledge artefacts, and distinctions between artefacts, people and processes become blurred, as do distinctions of space and time. Across these three “generations”, the teacher’s role changes from knowledge source (Education 1.0) to guide and knowledge source (Education 2.0) to orchestrator of collaborative knowledge creation (Education 3.0). The nature of the learner’s participation also changes from largely passive to increasingly active: the learner co-creates resources and opportunities and has a strong sense of ownership of his or her own education. In addition, participation by communities outside the traditional education system increases. Building on this framework, we offer our own “framework for future-oriented science education” (see Figure 1). In this framework, we present two continua: one reflects the nature of student participation and the other the nature of community participation, each running from minimal to transformative. Minimal participation reflects little or no input by the student or community into the direction of the learning: what is learned, how it is learned and how what is learned will be assessed. Transformative participation, in contrast, represents education where the student or community drives the direction of the learning, including making decisions about content, learning approaches and assessment.

    Multiplierz: An Extensible API Based Desktop Environment for Proteomics Data Analysis

    BACKGROUND. Efficient analysis of results from mass spectrometry-based proteomics experiments requires access to disparate data types, including native mass spectrometry files, output from algorithms that assign peptide sequence to MS/MS spectra, and annotation for proteins and pathways from various database sources. Moreover, proteomics technologies and experimental methods are not yet standardized; hence a high degree of flexibility is necessary for efficient support of high- and low-throughput data analytic tasks. Development of a desktop environment that is sufficiently robust for deployment in data analytic pipelines, and simultaneously supports customization for programmers and non-programmers alike, has proven to be a significant challenge. RESULTS. We describe multiplierz, a flexible and open-source desktop environment for comprehensive proteomics data analysis. We use this framework to expose a prototype version of our recently proposed common API (mzAPI) designed for direct access to proprietary mass spectrometry files. In addition to routine data analytic tasks, multiplierz supports generation of information-rich, portable spreadsheet-based reports. Moreover, multiplierz is designed around a "zero infrastructure" philosophy, meaning that it can be deployed by end users with little or no system administration support. Finally, access to multiplierz functionality is provided via high-level Python scripts, resulting in a fully extensible data analytic environment for rapid development of custom algorithms and deployment of high-throughput data pipelines. CONCLUSION. Collectively, mzAPI and multiplierz facilitate a wide range of data analysis tasks, spanning technology development to biological annotation, for mass spectrometry-based proteomics research. Funding: Dana-Farber Cancer Institute; National Human Genome Research Institute (P50HG004233); National Science Foundation Integrative Graduate Education and Research Traineeship grant (DGE-0654108).
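
    As a rough illustration of the kind of scripted, spreadsheet-style reporting the abstract describes, the sketch below filters a handful of peptide-spectrum matches and writes a CSV report using only the Python standard library. The records, field names and score threshold are hypothetical, and this is not the actual multiplierz or mzAPI interface.

        import csv

        # Hypothetical peptide-spectrum matches; in practice these would come from a
        # search-engine results file opened through an mzAPI-style reader.
        psms = [
            {"scan": 1021, "peptide": "LVNEVTEFAK", "charge": 2, "score": 55.2},
            {"scan": 1188, "peptide": "DLGEEHFK", "charge": 2, "score": 18.7},
            {"scan": 1340, "peptide": "HPYFYAPELLYYANK", "charge": 3, "score": 72.9},
        ]

        SCORE_CUTOFF = 20.0  # illustrative threshold, not a recommended value

        with open("psm_report.csv", "w", newline="") as handle:
            writer = csv.DictWriter(handle, fieldnames=["scan", "peptide", "charge", "score"])
            writer.writeheader()
            for psm in psms:
                if psm["score"] >= SCORE_CUTOFF:  # keep the confidently identified peptides
                    writer.writerow(psm)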

    XML in Motion from Genome to Drug

    Information technology (IT) has emerged as central to the solution of contemporary genomics and drug discovery problems. Researchers involved in genomics, proteomics, transcriptional profiling, high-throughput structure determination, and other sub-disciplines of bioinformatics have a direct impact on this IT revolution. As the full genome sequences of many species and data from structural genomics, micro-arrays, and proteomics become available, integrating these data into a common platform requires sophisticated bioinformatics tools. Organizing these data into knowledge databases and developing appropriate software tools for analyzing them are going to be major challenges. XML (eXtensible Markup Language) forms the backbone of biological data representation and exchange over the internet, enabling researchers to aggregate data from various heterogeneous data resources. The present article gives a comprehensive overview of the integration of XML into biological databases that deal with sequence-structure-function relationships, and of its application to drug discovery. This e-medical science approach should be applied to other scientific domains; the latest trends in Semantic Web applications are also highlighted.
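
    To make the role of XML as an exchange format concrete, the sketch below parses a small, entirely hypothetical record with Python's standard xml.etree.ElementTree module. The element and attribute names are invented for illustration and do not correspond to any particular bioinformatics schema.

        import xml.etree.ElementTree as ET

        # A hypothetical XML record linking a gene to a protein and a candidate ligand.
        record = """
        <entry id="EX0001">
          <gene symbol="EGFR" organism="Homo sapiens"/>
          <protein length="1210">
            <sequence>MRPSGTAGAALLALLAALCPASRA</sequence>
          </protein>
          <ligand name="gefitinib" role="inhibitor"/>
        </entry>
        """

        root = ET.fromstring(record)
        gene = root.find("gene").get("symbol")
        length = int(root.find("protein").get("length"))
        ligand = root.find("ligand").get("name")

        print(f"{gene}: protein of {length} residues, example ligand {ligand}")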

    Communication and re-use of chemical information in bioscience.

    The current methods of publishing chemical information in bioscience articles are analysed. Using three papers as use-cases, it is shown that conventional methods relying on human procedures, including cut-and-paste, are time-consuming and introduce errors. The meaning of chemical terms and the identity of compounds are often ambiguous, and valuable experimental data such as spectra and computational results are almost always omitted. We describe a proof-of-concept Open XML architecture which addresses these concerns. Compounds are identified through explicit connection tables or links to persistent Open resources such as PubChem. It is argued that if publishers adopt these tools and protocols, the quality and quantity of chemical information available to bioscientists will increase and authors, publishers and readers will find the process cost-effective. This article was submitted to BioMed Central Bioinformatics and created on request with their Publicon system; the transformed manuscript is archived as PDF. Although it has been through the publisher's system, this step is purely automatic and the contents are those of a pre-refereed preprint. The formatting is provided by the system, and tables and figures appear at the end. An accompanying submission, http://www.dspace.cam.ac.uk/handle/1810/34580, describes the rationale and cultural aspects of publishing, abstracting and aggregating chemical information. BMC is an Open Access publisher and we emphasize that all content is re-usable under a Creative Commons License.
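
    As an illustration of linking a compound to a persistent open resource such as PubChem, the sketch below asks the PubChem PUG REST service for identifiers by compound name, using only the Python standard library. The endpoint pattern and property names follow our reading of PubChem's public PUG REST documentation, the compound name is arbitrary, and error handling is omitted for brevity.

        import json
        import urllib.parse
        import urllib.request

        def pubchem_identifiers(name):
            """Look up the canonical SMILES and InChIKey for a compound name via PUG REST."""
            url = (
                "https://pubchem.ncbi.nlm.nih.gov/rest/pug/compound/name/"
                f"{urllib.parse.quote(name)}/property/CanonicalSMILES,InChIKey/JSON"
            )
            with urllib.request.urlopen(url, timeout=30) as response:
                payload = json.load(response)
            return payload["PropertyTable"]["Properties"][0]

        print(pubchem_identifiers("caffeine"))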

    Generating spherical multiquadrangulations by restricted vertex splittings and the reducibility of equilibrium classes

    A quadrangulation is a graph embedded on the sphere such that each face is bounded by a walk of length 4, with parallel edges allowed. All quadrangulations can be generated by a sequence of graph operations called vertex splittings, starting from the path P_2 of length 2. We define the degree D of a splitting S and consider restricted splittings S_{i,j} with i <= D <= j. It is known that the splittings S_{2,3} generate all simple quadrangulations. Here we investigate the cases S_{1,2}, S_{1,3}, S_{1,1}, S_{2,2}, S_{3,3}. First we show that the splittings S_{1,2} are exactly the monotone ones, in the sense that the resulting graph contains the original as a subgraph. Then we show that they define a set of nontrivial ancestors beyond P_2 and that each quadrangulation has a unique ancestor. Our results have a direct geometric interpretation in the context of mechanical equilibria of convex bodies. The topology of the equilibria corresponds to a 2-coloured quadrangulation with independent set sizes s, u. The numbers s, u identify the primary equilibrium class associated with the body by Várkonyi and Domokos. We show that both S_{1,1} and S_{2,2} generate all primary classes from a finite set of ancestors, which is closely related to their geometric results. If, beyond s and u, the full topology of the quadrangulation is considered, we arrive at the more refined secondary equilibrium classes. As Domokos, Lángi and Szabó showed recently, one can create the geometric counterparts of unrestricted splittings to generate all secondary classes. Our results show that S_{1,2} can only generate a limited range of secondary classes from the same ancestor. The geometric interpretation of the additional ancestors defined by monotone splittings shows that minimal polyhedra play a key role in this process. We also present computational results on the number of secondary classes and multiquadrangulations. (21 pages, 11 figures, 3 tables.)
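
    As a small, purely illustrative companion to the (s, u) bookkeeping described above, the sketch below 2-colours a quadrangulation given as a list of quadrilateral faces (quadrangulations of the sphere are bipartite) and reports the two colour-class sizes. The face list encodes the cube, our own toy example rather than data from the paper.

        from collections import deque

        # Toy example: the cube as a quadrangulation, given by its six 4-gonal faces.
        faces = [
            (0, 1, 2, 3), (4, 5, 6, 7),
            (0, 1, 5, 4), (1, 2, 6, 5),
            (2, 3, 7, 6), (3, 0, 4, 7),
        ]

        # Build the adjacency list from consecutive vertices around each face.
        adjacency = {}
        for face in faces:
            for a, b in zip(face, face[1:] + face[:1]):
                adjacency.setdefault(a, set()).add(b)
                adjacency.setdefault(b, set()).add(a)

        # Quadrangulations of the sphere are bipartite, so a BFS 2-colouring succeeds.
        colour = {0: 0}
        queue = deque([0])
        while queue:
            v = queue.popleft()
            for w in adjacency[v]:
                if w not in colour:
                    colour[w] = 1 - colour[v]
                    queue.append(w)

        s = sum(1 for c in colour.values() if c == 0)
        u = len(colour) - s
        print(f"colour-class sizes: s = {s}, u = {u}")  # the cube gives s = u = 4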

    Quantifying Inactive Lithium in Lithium Metal Batteries

    Inactive lithium (Li) formation is the immediate cause of capacity loss and catastrophic failure of Li metal batteries. However, the chemical composition and the atomic-level structure of inactive Li have rarely been studied due to the lack of effective diagnostic tools that can accurately differentiate and quantify the Li+ in solid electrolyte interphase (SEI) components and the electrically isolated, unreacted metallic Li0, which together comprise the inactive Li. Here, by introducing a new analytical method, Titration Gas Chromatography (TGC), we can accurately quantify the contribution of metallic Li0 to the total amount of inactive Li. We uncover that the Li0, rather than the electrochemically formed SEI, dominates the inactive Li and the capacity loss. Using cryogenic electron microscopy to further study the microstructure and nanostructure of inactive Li, we find that the Li0 is surrounded by insulating SEI and so loses its electronic conduction pathway to the bulk electrode. Coupling the measurement of the global Li0 content to observations of its local atomic structure, we reveal the formation mechanism of inactive Li in different types of electrolytes and identify the true underlying cause of low Coulombic efficiency in Li metal deposition and stripping. We ultimately propose strategies for highly efficient Li deposition and stripping, enabling the Li metal anode for next-generation high-energy batteries.
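
    The titration idea rests on simple stoichiometry: metallic Li reacts with water as 2 Li + 2 H2O -> 2 LiOH + H2, so the amount of H2 measured by gas chromatography corresponds to twice as many moles of unreacted Li0. The short calculation below illustrates that conversion and the equivalent capacity in mAh; the measured H2 amount is a made-up number and the sketch is not the authors' calibrated TGC protocol.

        # Stoichiometry: 2 Li + 2 H2O -> 2 LiOH + H2, hence n(Li0) = 2 * n(H2).
        FARADAY = 96485.0      # C/mol
        MOLAR_MASS_LI = 6.94   # g/mol

        n_h2 = 1.5e-6          # mol of H2 measured by GC (hypothetical value)

        n_li0 = 2.0 * n_h2                         # mol of unreacted metallic Li0
        mass_li0_ug = n_li0 * MOLAR_MASS_LI * 1e6  # micrograms of Li0
        capacity_mah = n_li0 * FARADAY / 3.6       # 1 mAh = 3.6 C, one electron per Li atom

        print(f"metallic Li0: {mass_li0_ug:.1f} ug, equivalent to {capacity_mah:.3f} mAh of capacity")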