
    GPU cards as a low cost solution for efficient and fast classification of high dimensional gene expression datasets

    The day when bioinformatics tools are reliable enough to become a standard aid in routine clinical diagnostics is getting very close. However, it is important to remember that the more complex and advanced bioinformatics tools become, the more computing performance they require. Unfortunately, the cost of High Performance Computing (HPC) platforms is still prohibitive for both public and private medical practices. Therefore, to promote and facilitate the use of bioinformatics tools, it is important to identify low-cost parallel computing solutions. This paper presents a successful experience in using the parallel processing capabilities of Graphical Processing Units (GPU) to speed up the classification of gene expression profiles. Results show that using open-source CUDA programming libraries yields a significant increase in performance and therefore narrows the gap between advanced bioinformatics tools and real medical practice.

    Development of Integrative Bioinformatics Applications using Cloud Computing resources and Knowledge Organization Systems (KOS).

    This presentation covers the use of semantic web abstractions, in particular domain-neutral Knowledge Organization Systems (KOS), to manage distributed, cloud-based, integrative bioinformatics infrastructure. It derives from a recent publication:

Almeida JS, Deus HF, Maass W. (2010) S3DB core: a framework for RDF generation and management in bioinformatics infrastructures. BMC Bioinformatics. 2010 Jul 20;11(1):387. [PMID 20646315].

These PowerPoint slides were presented as a keynote (9–10 am) at Semantic Web Applications and Tools for Life Sciences on December 10th, 2010, in Berlin, Germany (http://www.swat4ls.org/2010/progr.php).

    Applications of next-generation sequencing technologies and computational tools in molecular evolution and aquatic animals conservation studies : a short review

    Aquatic ecosystems that form major biodiversity hotspots are critically threatened by environmental and anthropogenic stressors. We believe that, in this genomic era, computational methods can be applied to promote aquatic biodiversity conservation by addressing questions related to the evolutionary history of aquatic organisms at the molecular level. However, the huge amounts of genomic data generated can only be interpreted through the use of bioinformatics. Here, we examine the applications of next-generation sequencing technologies and bioinformatics tools to study the molecular evolution of aquatic animals, and discuss the current challenges and future perspectives of using bioinformatics in aquatic animal conservation efforts.

    MACBenAbim: A Multi-platform Mobile Application for searching keyterms in Computational Biology and Bioinformatics

    Computational biology and bioinformatics are gradually gaining ground in Africa and other developing nations of the world. However, in these countries, the challenges facing computational biology and bioinformatics education include inadequate infrastructure and a lack of readily available complementary and motivational tools to support learning as well as research. This has lowered the morale of many promising undergraduates, postgraduates and researchers who might otherwise aspire to undertake future study in these fields. In this paper, we develop and describe MACBenAbim (Multi-platform Mobile Application for Computational Biology and Bioinformatics), a flexible, user-friendly tool to search for, define and describe the meanings of keyterms in computational biology and bioinformatics, thus expanding the frontiers of knowledge of its users. The tool can also visualize results in a mobile multi-platform context.

    Evaluation of Dynamic Cell Processes and Behavior Using Video Bioinformatics Tools

    Just as body language can reveal a person’s state of well-being, dynamic changes in cell behavior and morphology can be used to monitor processes in cultured cells. This chapter discusses how CL-Quant software, a commercially available video bioinformatics tool, can be used to extract quantitative data on: (1) growth/proliferation, (2) cell and colony migration, (3) reactive oxygen species (ROS) production, and (4) neural differentiation. Protocols created using CL-Quant were used to analyze both single cells and colonies. Time-lapse experiments in which different cell types were subjected to various chemical exposures were performed using Nikon BioStations. Proliferation rate was measured in human embryonic stem cell colonies by quantifying colony area (pixels) and in single cells by measuring confluency (pixels). Colony and single-cell migration were studied by measuring total displacement (the distance between the starting and ending points) and the total distance traveled by the colonies/cells. To quantify ROS production, cells were pre-loaded with MitoSOX Red™, a mitochondrial ROS (superoxide) indicator, treated with various chemicals, and the total intensity of the red fluorescence was then measured in each frame. Lastly, neural stem cells were incubated in differentiation medium for 12 days, and time-lapse images were collected daily. Differentiation of neural stem cells was quantified using a protocol that detects young neurons. CL-Quant software can be used to evaluate biological processes in living cells, and the protocols developed in this project can be applied to basic research and toxicological studies, or to monitor quality control in culture facilities.

    Bioinformatics tools for analysing viral genomic data

    The field of viral genomics and bioinformatics is experiencing a strong resurgence due to high-throughput sequencing (HTS) technology, which enables the rapid and cost-effective sequencing and subsequent assembly of large numbers of viral genomes. In addition, the unprecedented power of HTS technologies has enabled the analysis of intra-host viral diversity and quasispecies dynamics in relation to important biological questions on viral transmission, vaccine resistance and host jumping. HTS also enables the rapid identification of both known and potentially new viruses from field and clinical samples, thus adding new tools to the fields of viral discovery and metagenomics. Bioinformatics has been central to the rise of HTS applications because new algorithms and software tools are continually needed to process and analyse the large, complex datasets generated in this rapidly evolving area. In this paper, the authors give a brief overview of the main bioinformatics tools available for viral genomic research, with a particular emphasis on HTS technologies and their main applications. They summarise the major steps in various HTS analyses, starting with quality control of raw reads and encompassing activities ranging from consensus and de novo genome assembly to variant calling and metagenomics, as well as RNA sequencing.

    Comparative functional genomics approach for the annotation of proteins in Unclassified Halophilic archaeon DL31

    The structure, function and sub-cellular location of the unknown proteins from the unclassified Halophilic archaeon DL31 were predicted in order to characterize the proteins within their respective families. A total of 991 genes encoding hypothetical proteins in the Halophilic archaeon DL31 chromosome were identified by applying computational methods and bioinformatics web tools. Structures were predicted for 206 unknown proteins, whereas functions were predicted for 825 protein sequences. Function prediction was performed with bioinformatics web tools such as CDD-BLAST, INTERPROSCAN and PFAM, by searching protein databases for the presence of conserved domains. Sub-cellular locations were predicted for all unknown proteins using the CELLO v2.5 server, while tertiary structures were constructed using the PS2 protein structure prediction server. This study revealed the structure, function and sub-cellular localization of unknown proteins in the unclassified Halophilic archaeon DL31 chromosome.

    The Importance of Modularity in Bioinformatics Tools

    In the last decade the number of bioinformatics tools has increased enormously. There are tools to store, analyse, visualize, edit or generate biological data, and still more are in development. However, the demand for increased functionality in a single piece of software must be balanced against the need for modularity to keep the software maintainable. In complex systems, these conflicting demands of features and maintainability are often resolved by plug-in systems.

For example, Cytoscape, an open-source platform for complex-network analysis and visualization, uses a plug-in system to allow the application to be extended without changing its core. This not only allows the integration of new functionality without a new release, but also makes it possible for other developers to contribute the plug-ins they need for their research.

Most tools have their own individual plug-in system, tailored to the needs of the application. These are often very simple and easy to use. However, the increasing complexity of plug-ins demands more functionality from the plug-in system: we want to reuse components in different contexts, we want simple plug-in interfaces, and we want to allow communication and dependencies between plug-ins. Many tools implemented in Java face these problems, and there seems to be a common solution: the integration of an established modularity framework such as OSGi. To our knowledge, a number of developers of bioinformatics tools are already implementing, planning or considering the integration of OSGi into their applications, e.g. Cytoscape, Protege, PathVisio, ImageJ, Jalview and Chipster. The adoption of modularity frameworks in the development of bioinformatics applications is steadily increasing and should be considered in the design of new software.

By modularity, in the traditional computer-science sense, we mean the division of a software application into logical parts with separate concerns. To ease development, the application is separated into smaller logical parts, which are implemented individually. A set of modules can form a larger application, but only if a proper glue is used; OSGi is an example of such a glue. OSGi allows developers to build an infrastructure into an application for adding and using different modules. It provides mechanisms that let individual modules rely on and interact with each other, opening up the possibility of combining different modules to solve the problem at hand. Later, modules can be removed and new ones added to tackle another problem. As Katy Boerner writes in her article 'Plug-and-Play Macroscopes', we should 'implement software frameworks that empower domain scientists to assemble their own continuously evolving macroscopes, adding and upgrading existing (and removing obsolete) plug-ins to arrive at a set that is truly relevant for their work'.
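The "glue" described above can be sketched in plain Java as a minimal plug-in registry: modules register themselves at runtime, other modules look them up instead of depending on them at compile time, and modules can be removed again. This is only an illustrative sketch of the pattern, not the OSGi API; all names (Plugin, PluginRegistry, GffExportPlugin) are invented for the example.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;

public class PluginRegistryDemo {

    /** Contract every module implements. */
    interface Plugin {
        String id();
        String run(String input);
    }

    /** The glue: holds modules and mediates lookups between them. */
    static class PluginRegistry {
        private final Map<String, Plugin> plugins = new LinkedHashMap<>();

        void register(Plugin p)  { plugins.put(p.id(), p); }
        void remove(String id)   { plugins.remove(id); }
        Optional<Plugin> lookup(String id) {
            return Optional.ofNullable(plugins.get(id));
        }
    }

    /** An example module: pretends to export data as GFF. */
    static class GffExportPlugin implements Plugin {
        public String id() { return "gff-export"; }
        public String run(String input) { return "##gff-version 3\n" + input; }
    }

    public static void main(String[] args) {
        PluginRegistry registry = new PluginRegistry();
        registry.register(new GffExportPlugin());

        // A consumer looks the module up by id instead of linking to it.
        registry.lookup("gff-export")
                .map(p -> p.run("chr1\t.\tgene\t1\t100"))
                .ifPresent(System.out::println);

        // Modules can be swapped out later to tackle another problem.
        registry.remove("gff-export");
    }
}
```

A real framework such as OSGi adds what this sketch omits: classloader isolation, versioned dependencies between modules, and lifecycle events when services appear or disappear.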

Some of these modules will be specific to one application, but many can actually be reused by other tools. Consider general features such as the import or export of different file formats, a layout algorithm that could be used by several visualization tools, or the lookup of an entry in an external online database. Why should every tool implement its own parser or algorithm? Modularity helps to share functionality: there is no need to start from scratch and implement everything anew, so developers can focus on new and important features.
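The reuse argument can be made concrete with a small sketch: one shared FASTA parser module consumed by two hypothetical tools, so neither has to implement parsing itself. The names here (FastaParser, sequenceLengths, recordCount) are invented for illustration.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SharedParserDemo {

    /** Reusable module: parses FASTA text into an id -> sequence map. */
    static class FastaParser {
        static Map<String, String> parse(String fasta) {
            Map<String, String> records = new LinkedHashMap<>();
            String id = null;
            StringBuilder seq = new StringBuilder();
            for (String line : fasta.split("\n")) {
                if (line.startsWith(">")) {
                    if (id != null) records.put(id, seq.toString());
                    id = line.substring(1).trim();
                    seq.setLength(0);
                } else {
                    seq.append(line.trim());
                }
            }
            if (id != null) records.put(id, seq.toString());
            return records;
        }
    }

    /** "Tool A" reuses the parser to report sequence lengths. */
    static List<String> sequenceLengths(String fasta) {
        List<String> out = new ArrayList<>();
        FastaParser.parse(fasta).forEach((id, s) -> out.add(id + ": " + s.length()));
        return out;
    }

    /** "Tool B" reuses the same parser to count records. */
    static int recordCount(String fasta) {
        return FastaParser.parse(fasta).size();
    }

    public static void main(String[] args) {
        String fasta = ">seq1\nACGT\nACGT\n>seq2\nGGCC";
        System.out.println(sequenceLengths(fasta)); // [seq1: 8, seq2: 4]
        System.out.println(recordCount(fasta));     // 2
    }
}
```

Packaged as a module behind a small interface, the parser is written and debugged once, and every tool that needs FASTA input simply depends on it.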

Adding modularity, or better, a modularity framework, to an existing software application is not a trivial task. The developers of Cytoscape are currently undertaking this challenge for the coming version 3. We are also working on the integration of OSGi into our pathway visualization tool PathVisio, and we now want to share and compare our experiences so that others can benefit from our findings. This will help them not only in deciding whether OSGi is a suitable solution for them, but also in the integration process itself.