    SGABU computational platform for multiscale modeling: Bridging the gap between education and research

    BACKGROUND AND OBJECTIVE: In accordance with the latest aspirations in the field of bioengineering, there is a need for a web-accessible yet powerful cloud computational platform that combines datasets and multiscale models related to bone modeling, cancer, cardiovascular diseases and tissue engineering. The SGABU platform may become a powerful information system for research and education that can integrate data, extract information, and facilitate knowledge exchange, with the goal of creating and developing appropriate computing pipelines that provide accurate and comprehensive biological information from the molecular to the organ level. METHODS: The datasets integrated into the platform are obtained from experimental and/or clinical studies and are mainly in tabular or image file format, including metadata. The implementation of multiscale models is an ambitious effort of the platform to capture phenomena at different length scales, described using partial and ordinary differential equations, which are solved numerically on complex geometries using the finite element method. The majority of the SGABU platform's simulation pipelines are provided as Common Workflow Language (CWL) workflows. Each of them requires a CWL implementation on the backend and a user-friendly interface built with standard web technologies. The platform is available at https://sgabu-test.unic.kg.ac.rs/login. RESULTS: The main dashboard of the SGABU platform is divided into sections for each field of research, and each section includes subsections for datasets and multiscale models. Datasets can be presented in simple tabular form or with technologies such as Plotly.js for interactive 2D plots and Kitware ParaView Glance for 3D views. The models rely on Docker containerization to package the individual tools and on CWL orchestration to describe inputs, with validation forms, and outputs, with tabular views, interactive diagrams, 3D views and animations. CONCLUSIONS: In practice, the structure of the SGABU platform means that any of the integrated workflows could work equally well on any other bioengineering platform. The key advantage of the SGABU platform over similar efforts is its versatility, achieved through the use of modern, modular, and extensible technology at all levels of the architecture.
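    To make the pipeline pattern concrete, below is a minimal sketch of running a CWL tool description from Python with cwltool's factory API, the reference CWL runner. The echo.cwl tool written here is a hypothetical single-step example for illustration, not one of SGABU's actual workflows.

    ```python
    # Minimal sketch: executing a CWL tool from Python via cwltool.
    # Assumes cwltool is installed (pip install cwltool); echo.cwl is a
    # hypothetical tool description created here for illustration only.
    from pathlib import Path
    import cwltool.factory

    Path("echo.cwl").write_text("""\
    cwlVersion: v1.2
    class: CommandLineTool
    baseCommand: echo
    inputs:
      message:
        type: string
        inputBinding: {position: 1}
    outputs:
      out:
        type: stdout
    """)

    factory = cwltool.factory.Factory()   # build a runner with default settings
    echo = factory.make("echo.cwl")       # load and validate the tool description
    result = echo(message="hello SGABU")  # execute; returns a dict of outputs
    print(result)
    ```

    In a platform like the one described, each such tool would additionally name a Docker image so the runner executes it in a container rather than on the host.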

    Work flows in life science

    The introduction of computer science technology in the life science domain has resulted in a new life science discipline called bioinformatics. Bioinformaticians are biologists who know how to apply computer science technology to perform computer-based experiments, also known as in-silico or dry-lab experiments. Various tools, such as databases, web applications and scripting languages, are used to design and run in-silico experiments. As the size and complexity of these experiments grow, new types of tools are required to design and execute the experiments and to analyse the results. Workflow systems promise to fulfill this role. The bioinformatician composes an experiment by using tools and web services as building blocks and connecting them, often through a graphical user interface. Workflow systems, such as Taverna, provide access to up to a few thousand resources in a uniform way. Although workflow systems are intended to make bioinformaticians' work easier, bioinformaticians experience difficulties in using them. This thesis is devoted to finding out which problems bioinformaticians experience when using workflow systems and to providing solutions for these problems.
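    The building-block idea can be sketched in a few lines: each workflow step wraps a web service, and steps are chained by feeding one service's output into the next. The endpoint URLs and parameter names below are hypothetical placeholders, not real services or the Taverna API.

    ```python
    # Minimal sketch of web services as workflow building blocks.
    # All endpoints and field names are hypothetical placeholders.
    import requests

    def fetch_sequence(accession: str) -> str:
        # Step 1: retrieve a sequence from a (hypothetical) data service.
        r = requests.get("https://example.org/sequences", params={"id": accession})
        r.raise_for_status()
        return r.text

    def run_alignment(sequence: str) -> dict:
        # Step 2: submit the sequence to a (hypothetical) analysis service.
        r = requests.post("https://example.org/align", data={"seq": sequence})
        r.raise_for_status()
        return r.json()

    # The "workflow": data flows from one building block into the next.
    hits = run_alignment(fetch_sequence("P12345"))
    print(hits)
    ```

    A workflow system adds what this script lacks: uniform service discovery, a graphical composer, provenance capture and re-runnable experiment descriptions.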

    Metadata stewardship in nanosafety research: learning from the past, preparing for an "on-the-fly" FAIR future

    Introduction: Significant progress has been made in terms of best practice in research data management for nanosafety. Some of the underlying approaches to date are, however, overly focussed on the needs of specific research projects or aligned to a single data repository, and this "silo" approach is hampering their general adoption by the broader research community and individual labs. Methods: State-of-the-art data/knowledge collection, curation, management, FAIRification, and sharing solutions applied in the nanosafety field are reviewed, focusing on unique features which should be generalised and integrated into a functional FAIRification ecosystem that addresses the needs of both data generators and data (re)users. Results: The development of data capture templates has focussed on standardised single-endpoint Test Guidelines, which does not reflect the complexity of real laboratory processes, where multiple assays are interlinked into an overall study, and where non-standardised assays are developed to address novel research questions and probe mechanistic processes to generate the basis for read-across from one nanomaterial to another. By focussing on the needs of data providers and data users, we identify how existing tools and approaches can be re-framed to enable "on-the-fly" (meta)data definition, data capture, curation and FAIRification that are sufficiently flexible to address the complexity of nanosafety research, yet harmonised enough to facilitate integration of datasets from different sources generated for different research purposes. By mapping the available tools for nanomaterials safety research (including nanomaterials characterisation, non-standard (mechanistic-focussed) methods, measurement principles and experimental setup, environmental fate, and requirements from new research foci such as safe-and-sustainable-by-design), a strategy for integration and bridging between silos is presented. The NanoCommons KnowledgeBase has shown how data from different sources can be integrated into a one-stop shop for searching, browsing and accessing data (without copying), and thus how to break the boundaries between data silos. Discussion: The next steps are to generalise the approach by defining a process to build consensus (meta)data standards, to develop solutions to make (meta)data more machine-actionable (on-the-fly ontology development), and to establish a distributed FAIR data ecosystem maintained by the community beyond specific projects. Since other multidisciplinary domains might also struggle with data silofication, the learnings presented here may be transferable to facilitate data sharing within other communities and support harmonisation of approaches across disciplines to prepare the ground for cross-domain interoperability. Visit WorldFAIR online at http://worldfair-project.eu. WorldFAIR is funded by the EC HORIZON-WIDERA-2021-ERA-01-41 Coordination and Support Action under Grant Agreement No. 101058393
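    As a rough illustration of what a machine-actionable "on-the-fly" (meta)data record can look like, the sketch below builds a JSON-LD document mixing schema.org terms with a hypothetical nanosafety vocabulary. The "nano:" field names are illustrative placeholders, not an agreed community standard or any actual NanoCommons schema.

    ```python
    # Minimal sketch of a machine-actionable (meta)data record as JSON-LD.
    # Vocabulary under "nano:" is a hypothetical placeholder.
    import json

    record = {
        "@context": {
            "@vocab": "https://schema.org/",
            "nano": "https://example.org/nanosafety#",  # hypothetical vocabulary
        },
        "@type": "Dataset",
        "name": "TiO2 nanoparticle cytotoxicity assay",
        "license": "https://creativecommons.org/licenses/by/4.0/",
        "nano:material": "TiO2",
        "nano:assay": "MTT viability",  # non-standardised assay described inline
        "nano:coreSizeNm": 21,
    }

    # JSON-LD keeps the record both human-readable and machine-actionable,
    # a prerequisite for FAIR reuse across repositories.
    print(json.dumps(record, indent=2))
    ```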

    A Semantic Framework for Declarative and Procedural Knowledge

    In any scientific domain, the full set of data and programs has reached an "-ome" status, i.e. it has grown massively. The original article on the Semantic Web describes the evolution of a Web of actionable information, i.e. information derived from data through a semantic theory for interpreting the symbols. In a Semantic Web, methodologies are studied for describing, managing and analyzing both resources (domain knowledge) and applications (operational knowledge) - without any restriction on what and where they are respectively suitable and available in the Web - as well as for realizing automatic and semantic-driven workflows of Web applications elaborating Web resources. This thesis attempts to provide a synthesis among Semantic Web technologies, Ontology Research, Knowledge and Workflow Management. Such a synthesis is represented by Resourceome, a Web-based framework consisting of two components which strictly interact with each other: an ontology-based and domain-independent knowledge management system (Resourceome KMS) - relying on a knowledge model where resource and operational knowledge are contextualized in any domain - and a semantic-driven workflow editor, manager and agent-based execution system (Resourceome WMS). The Resourceome KMS and the Resourceome WMS are exploited in order to realize semantic-driven formulations of workflows, where activities are semantically linked to any involved resource. On the whole, combining the use of domain ontologies and workflow techniques, Resourceome provides a flexible domain and operational knowledge organization, a powerful engine for semantic-driven workflow composition, and a distributed, automatic and transparent environment for workflow execution.
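    The core idea, describing both domain resources and workflow activities in one graph and linking them semantically, can be sketched with rdflib. The ex: namespace and property names below are hypothetical stand-ins, not the actual Resourceome ontology.

    ```python
    # Minimal sketch: domain knowledge (a resource) and operational knowledge
    # (a workflow activity) in one RDF graph, semantically linked.
    # The ex: namespace and properties are hypothetical placeholders.
    from rdflib import Graph, Namespace, Literal, RDF

    EX = Namespace("https://example.org/resourceome#")
    g = Graph()
    g.bind("ex", EX)

    # Domain knowledge: a Web resource (e.g., a sequence database).
    g.add((EX.UniProt, RDF.type, EX.Resource))
    g.add((EX.UniProt, EX.providesData, Literal("protein sequences")))

    # Operational knowledge: a workflow activity linked to that resource.
    g.add((EX.FetchSequences, RDF.type, EX.Activity))
    g.add((EX.FetchSequences, EX.uses, EX.UniProt))

    print(g.serialize(format="turtle"))
    ```

    With such links in place, a workflow composer can answer semantic queries like "which activities elaborate protein sequence resources?" rather than matching tools by name alone.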

    Advancing myxobacterial natural product discovery by combining genome and metabolome mining with organic synthesis

    Myxobacteria represent a viable source of natural products with a broad variety of chemical scaffolds and intriguing biological activities. This thesis covers different contemporary approaches to myxobacterial secondary metabolism. The ribosomal peptide myxarylin was discovered through a genome-guided approach; this study describes its discovery, semi-synthesis-assisted isolation, structure elucidation and heterologous production. Furthermore, statistics-based metabolome mining revealed a family of light-sensitive compounds with as yet elusive structures. A biosynthetic gene cluster putatively encoding the biosynthetic machinery could be identified by cluster inactivation experiments. Metabolome mining additionally revealed new myxochelin congeners featuring a rare nicotinic acid moiety. Total synthesis was applied to confirm structures, elucidate the absolute stereochemistry and generate additional non-natural derivatives. Finally, total synthesis was used to create a small library of sandacrabins, a family of terpenoid alkaloids with promising antiviral activities, with the aim of developing improved congeners with increased target activity and reduced cytotoxicity. The combination of up-to-date approaches in natural product discovery, especially UHPLC-hrMS workflows, and small-scale organic synthesis was successfully applied to facilitate compound isolation, confirm structures and create novel congeners of myxobacterial natural products.

    BioVeL : a virtual laboratory for data analysis and modelling in biodiversity science and ecology

    Background: Making forecasts about biodiversity and giving support to policy relies increasingly on large collections of data held electronically, and on substantial computational capability and capacity to analyse, model, simulate and predict using such data. However, the physically distributed nature of data resources and of expertise in advanced analytical tools creates many challenges for the modern scientist. Across the wider biological sciences, presenting such capabilities on the Internet (as "Web services") and using scientific workflow systems to compose them for particular tasks is a practical way to carry out robust "in silico" science. However, use of this approach in biodiversity science and ecology has thus far been quite limited. Results: BioVeL is a virtual laboratory for data analysis and modelling in biodiversity science and ecology, freely accessible via the Internet. BioVeL includes functions for accessing and analysing data through curated Web services; for performing complex in silico analysis through exposure of R programs, workflows, and batch processing functions; for online collaboration through sharing of workflows and workflow runs; for experiment documentation through reproducibility and repeatability; and for computational support via seamless connections to supporting computing infrastructures. We developed and improved more than 60 Web services with significant potential in many different kinds of data analysis and modelling tasks. We composed reusable workflows using these Web services, also incorporating R programs. Deploying these tools into an easy-to-use and accessible 'virtual laboratory', free via the Internet, we applied the workflows in several diverse case studies. We opened the virtual laboratory for public use and, through a programme of external engagement, we actively encouraged scientists and third-party application and tool developers to try out the services and contribute to the activity. Conclusions: Our work shows we can deliver an operational, scalable and flexible Internet-based virtual laboratory to meet new demands for data processing and analysis in biodiversity science and ecology. In particular, we have successfully integrated existing and popular tools and practices from different scientific disciplines to be used in biodiversity and ecological research.
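    The pattern of composing a curated data-access service with an analysis step can be sketched as follows. The endpoint and field names are hypothetical placeholders for BioVeL's curated services, and the Python summary stands in for what the platform would typically delegate to an exposed R program.

    ```python
    # Minimal sketch of the virtual-laboratory pattern: fetch occurrence
    # records from a (hypothetical) curated web service, then analyse them.
    import requests

    def fetch_occurrences(species: str) -> list[dict]:
        # Data-access step: hypothetical curated occurrence-record service.
        r = requests.get("https://example.org/occurrences",
                         params={"scientificName": species, "limit": 100})
        r.raise_for_status()
        return r.json()["records"]

    def summarise(records: list[dict]) -> dict:
        # Analysis step: in BioVeL this role is typically filled by an
        # exposed R program; a plain Python count stands in for it here.
        countries = [rec.get("country", "unknown") for rec in records]
        return {c: countries.count(c) for c in set(countries)}

    print(summarise(fetch_occurrences("Quercus robur")))
    ```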

    Current Trends and New Challenges of Databases and Web Applications for Systems Driven Biological Research

    The dynamic and rapidly evolving nature of systems-driven research imposes special requirements on the technology, approach, design and architecture of computational infrastructure, including databases and Web applications. Several solutions have been proposed to meet these expectations, and novel methods have been developed to address the persisting problems of data integration. It is important for researchers to understand the different technologies and approaches; having familiarized themselves with the pros and cons of the existing technologies, researchers can exploit their capabilities to the maximum potential for integrating data. In this review we discuss the architecture, design and key technologies underlying some of the prominent databases and Web applications, describe their roles in the integration of biological data, and investigate some of the emerging design concepts and computational technologies that are likely to play a key role in the future of systems-driven biomedical research.