109 research outputs found

    Using registries to integrate bioinformatics tools and services into workbench environments

    The diversity and complexity of bioinformatics resources present significant challenges to their localisation, deployment and use, creating a need for reliable systems that address these issues. Meanwhile, users demand increasingly usable and integrated ways to access and analyse data, especially within convenient, integrated “workbench” environments. Resource descriptions are the core element of registry and workbench systems, used both to help the user find and comprehend available software tools, data resources, and Web Services, and to localise, execute and combine them. These descriptions are, however, hard and expensive to create and maintain, because they are volatile and require exhaustive knowledge of the described resource, its applicability to biological research, and the data model and syntax used to describe it. We present here the Workbench Integration Enabler, a software component that eases the integration of bioinformatics resources into a workbench environment, using the descriptions provided by the existing ELIXIR Tools and Data Services Registry.
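
    As a minimal sketch of the kind of machine-readable description such a component consumes, the Python snippet below fetches one entry from the registry's public REST API. The endpoint layout and the example biotoolsID "signalp" are assumptions based on the registry's public interface, not details from the paper:

        import json
        import urllib.request

        # Illustrative only: endpoint layout assumed from the public
        # bio.tools REST API; any valid biotoolsID would do here.
        BIOTOOLS_ID = "signalp"
        url = f"https://bio.tools/api/tool/{BIOTOOLS_ID}/?format=json"

        with urllib.request.urlopen(url) as resp:
            entry = json.load(resp)

        # A workbench integrator would map fields like these onto its own
        # tool-description model before localising and executing the tool.
        print(entry.get("name"), "-", entry.get("homepage"))
        print(entry.get("description", "")[:200])
        for function in entry.get("function", []):
            for operation in function.get("operation", []):
                print("EDAM operation:", operation.get("term"))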

    Tools and data services registry: a community effort to document bioinformatics resources

    Life sciences are yielding huge data sets that underpin scientific discoveries fundamental to improvement in human health, agriculture and the environment. In support of these discoveries, a plethora of databases and tools are deployed, in technically complex and diverse implementations, across a spectrum of scientific disciplines. The corpus of documentation of these resources is fragmented across the Web, with much redundancy, and has lacked a common standard of information. The outcome is that scientists must often struggle to find, understand, compare and use the best resources for the task at hand. Here we present a community-driven curation effort, supported by ELIXIR, the European infrastructure for biological information, that aspires to a comprehensive and consistent registry of information about bioinformatics resources. The sustainable upkeep of this Tools and Data Services Registry is assured by a curation effort driven by and tailored to local needs, and shared amongst a network of engaged partners. As of November 2015, the registry includes 1785 resources, with depositions from 126 individual registrations including 52 institutional providers and 74 individuals. With community support, the registry can become a standard for dissemination of information about bioinformatics resources: we welcome everyone to join us in this common endeavour. The registry is freely available at https://bio.tools.

    Using bio.tools to generate and annotate workbench tool descriptions

    Workbench and workflow systems such as Galaxy, Taverna, Chipster, or Common Workflow Language (CWL)-based frameworks facilitate access to bioinformatics tools in a user-friendly, scalable and reproducible way. Still, the integration of tools in such environments remains a cumbersome, time-consuming and error-prone process. A major consequence is incomplete or outdated tool descriptions that often lack important information, including parameters and metadata such as publications or links to documentation. ToolDog (Tool DescriptiOn Generator) facilitates the integration of tools registered in the ELIXIR tools registry (https://bio.tools) into workbench environments by generating tool description templates. ToolDog includes two modules. The first module analyses the source code of the bioinformatics software with language-specific plugins and generates a skeleton for a Galaxy XML or CWL tool description. The second module is dedicated to the enrichment of the generated tool description, using metadata provided by bio.tools. The latter module can also be used on its own to complete or correct existing tool descriptions with missing metadata.
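
    To illustrate the enrichment idea (this is a sketch of the concept, not ToolDog's actual output), the snippet below maps a few bio.tools-style metadata fields onto a minimal Galaxy tool XML skeleton; the helper function and the metadata values are hypothetical:

        import xml.etree.ElementTree as ET

        def galaxy_skeleton(meta: dict) -> str:
            """Build a bare Galaxy tool XML skeleton from bio.tools-style
            metadata; command, inputs and outputs are left to the curator
            (or, in ToolDog, to the source-code analysis module)."""
            tool = ET.Element("tool", id=meta["biotoolsID"], name=meta["name"],
                              version=meta.get("version", "0.1.0"))
            ET.SubElement(tool, "description").text = meta.get("description", "")
            ET.SubElement(tool, "command")   # command line left for the curator
            ET.SubElement(tool, "inputs")    # parameters come from code analysis
            ET.SubElement(tool, "outputs")
            ET.SubElement(tool, "help").text = meta.get("homepage", "")
            return ET.tostring(tool, encoding="unicode")

        # Placeholder metadata, as it might be fetched from https://bio.tools:
        print(galaxy_skeleton({
            "biotoolsID": "example_tool",
            "name": "Example Tool",
            "description": "Prediction of something useful.",
            "homepage": "https://example.org/example_tool",
        }))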

    Methods Included: Standardizing Computational Reuse and Portability with the Common Workflow Language

    A widely used standard for portable, multilingual data analysis pipelines would bring considerable benefits to scholarly publication reuse, research/industry collaboration, regulatory cost control, and the environment. Published research that used multiple computer languages for its analysis pipelines would include a complete and reusable description of that analysis, runnable on a diverse set of computing environments. Researchers would be able to collaborate and reuse these pipelines more easily, adding or exchanging components regardless of the programming language used; collaborations with and within industry would be easier; and approval of new medical interventions that rely on such pipelines would be faster. Time would be saved and environmental impact reduced, as these descriptions contain enough information for advanced optimization without user intervention. Workflows are widely used in data analysis pipelines, enabling innovation and decision-making for modern society. In many domains the analysis components are numerous and written in multiple different computer languages by third parties. Without a standard for reusable and portable multilingual workflows, however, reusing published workflows, collaborating on open problems, and optimizing their execution are all severely hampered. Prior to the start of the CWL project, there was no standard for describing multilingual analysis pipelines in a portable and reusable manner. Even today, although hundreds of single-vendor and other single-source systems exist that run workflows, none is a general, community-driven, consensus-built standard.
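
    To make "portable description" concrete, a minimal sketch: the snippet writes a one-step CWL CommandLineTool wrapping echo and runs it with the cwltool reference runner (assumed installed, e.g. via pip install cwltool); the file name is arbitrary, and any CWL-aware engine could execute the same file unchanged:

        import pathlib
        import subprocess
        import textwrap

        # A one-step CWL tool description wrapping `echo`; this YAML text is
        # the portable artefact, independent of the engine that runs it.
        CWL_TOOL = textwrap.dedent("""\
            cwlVersion: v1.2
            class: CommandLineTool
            baseCommand: echo
            inputs:
              message:
                type: string
                inputBinding:
                  position: 1
            outputs: []
            """)

        pathlib.Path("echo.cwl").write_text(CWL_TOOL)
        # Run with the reference runner; Toil, Arvados, and other engines
        # would accept the same description.
        subprocess.run(["cwltool", "echo.cwl", "--message", "Hello, CWL"],
                       check=True)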

    BOSC 2022: the first hybrid and 23rd annual Bioinformatics Open Source Conference

    The 23rd annual Bioinformatics Open Source Conference (BOSC 2022) was part of this year's conference on Intelligent Systems for Molecular Biology (ISMB). Launched in 2000 and held every year since, BOSC is the premier meeting covering open source bioinformatics and open science. ISMB 2022 was, for the first time, a hybrid conference, with the in-person component hosted in Madison, Wisconsin (USA). About 1000 people attended ISMB 2022 in person, with another 800 online. Approximately 200 people participated in BOSC sessions, which included 28 talks chosen from submitted abstracts, 46 posters, and a panel discussion, "Building and Sustaining Inclusive Open Science Communities". BOSC 2022 included joint keynotes with two other COSIs. Jason Williams gave a BOSC / Education COSI keynote entitled "Riding the bicycle: Including all scientists on a path to excellence". A joint session with Bio-Ontologies featured a keynote by Melissa Haendel, "The open data highway: turbo-boosting translational traffic with ontologies."

    WorkflowHub: a registry for computational workflows

    The rising popularity of computational workflows is driven by the need for repetitive and scalable data processing, sharing of processing know-how, and transparent methods. As both combined records of analysis and descriptions of processing steps, workflows should be reproducible, reusable, adaptable, and available. Workflow sharing presents opportunities to reduce unnecessary reinvention, promote reuse, increase access to best-practice analyses for non-experts, and increase productivity. In reality, workflows are scattered and difficult to find, in part due to the diversity of available workflow engines and ecosystems, and because workflow sharing is not yet part of research practice. WorkflowHub provides a unified registry for all computational workflows that links to community repositories, and supports both the workflow lifecycle and making workflows findable, accessible, interoperable, and reusable (FAIR). By interoperating with diverse platforms, services, and external registries, WorkflowHub adds value by supporting workflow sharing, explicitly assigning credit, enhancing FAIRness, and promoting workflows as scholarly artefacts. The registry has a global reach, with hundreds of research organisations involved and more than 700 workflows registered.
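
    Registered workflows can also be retrieved programmatically. A hedged sketch against WorkflowHub's SEEK-based JSON API follows; the "data"/"attributes" response layout shown is an assumption and may differ from the live service:

        import json
        import urllib.request

        # WorkflowHub is built on SEEK, which serves JSON:API-style
        # documents; the response layout used below is assumed.
        req = urllib.request.Request(
            "https://workflowhub.eu/workflows",
            headers={"Accept": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            listing = json.load(resp)

        for wf in listing.get("data", [])[:10]:
            print(wf.get("id"), wf.get("attributes", {}).get("title"))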

    Rapid response to the Mw 4.9 earthquake of November 11, 2019 in Le Teil, Lower Rhône Valley, France

    On November 11, 2019, an Mw 4.9 earthquake hit the region close to Montélimar (lower Rhône Valley, France), on the eastern margin of the Massif Central close to the external part of the Alps. Occurring in a moderate-seismicity area, this earthquake is remarkable for its very shallow focal depth (between 1 and 3 km), its magnitude, and the moderate to heavy damage it produced in several villages. InSAR interferograms indicated a shallow rupture about 4 km long reaching the surface, and the reactivation of the ancient NE-SW La Rouvière normal fault in reverse faulting, in agreement with the present-day E-W compressional tectonics. The peculiarity of this earthquake, together with the poor coverage of the epicentral region by permanent seismological and geodetic stations, triggered the mobilisation of the French post-seismic unit and the broad French scientific community from various institutions, with the deployment of geophysical instruments (seismological and geodetic stations), geological field surveys, and field evaluation of the intensity of the earthquake. Within 7 days after the mainshock, 47 seismological stations were deployed in the epicentral area to improve the location of Le Teil aftershocks relative to the French permanent seismological network (RESIF), to monitor the temporal and spatial evolution of microearthquakes close to the fault plane and the temporal evolution of the seismic response of three damaged historical buildings, and to study suspected site effects and their influence on the distribution of seismic damage. This seismological dataset, complemented by data owned by different institutions, was integrated into a homogeneous archive and distributed through FDSN web services by the RESIF data center. This dataset, together with observations of surface rupture evidence and geologic, geodetic and satellite data, will help to unravel the causes and rupture mechanism of this earthquake, and to account, in seismic hazard assessment, for earthquakes along the major regional Cévennes fault system in the context of present-day compressional tectonics.
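
    Because the aftershock archive is exposed through standard FDSN web services, it can be queried with any generic FDSN client. A minimal sketch with ObsPy (assuming obspy is installed; the coordinates and date window are illustrative approximations, not values from the paper):

        from obspy import UTCDateTime
        from obspy.clients.fdsn import Client

        # Query the RESIF FDSN station service for stations operating near
        # the Le Teil epicentre in the weeks after the 2019-11-11 mainshock.
        client = Client("RESIF")
        inventory = client.get_stations(
            latitude=44.55, longitude=4.67, maxradius=0.5,  # approximate
            starttime=UTCDateTime("2019-11-11"),
            endtime=UTCDateTime("2019-12-01"),
            level="station",
        )
        print(inventory)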

    ELIXIR and Toxicology: a community in development [version 2; peer review: 2 approved]

    Toxicology has been an active research field for many decades, with academic, industrial and government involvement. Modern omics and computational approaches are changing the field, from merely disease-specific observational models into target-specific predictive models. Traditionally, toxicology has strong links with other fields such as biology, chemistry, pharmacology, and medicine. With the rise of synthetic and new engineered materials, alongside ongoing prioritisation needs in chemical risk assessment for existing chemicals, early predictive evaluations are becoming of utmost importance for both scientific and regulatory purposes. ELIXIR is an intergovernmental organisation that brings together life science resources from across Europe. To coordinate the linkage of various life science efforts around modern predictive toxicology, the establishment of a new ELIXIR Community is seen as instrumental. In the past few years, joint efforts, building on incidental overlap, have been piloted in the context of ELIXIR. For example, the EU-ToxRisk, diXa, HeCaToS, transQST, and nanotoxicology communities have worked with the ELIXIR TeSS, Bioschemas, and Compute Platforms and their activities. In 2018, a core group of interested parties wrote a proposal outlining a sketch of what this new ELIXIR Toxicology Community would look like. A recent workshop (held September 30th to October 1st, 2020) extended this into an ELIXIR Toxicology roadmap and a shortlist of limited-investment, high-gain collaborations to give substance to this new community. This white paper outlines the results of these efforts and defines our vision of the ELIXIR Toxicology Community and how it complements other ELIXIR activities.
    • …