
    Visualizing genome and systems biology: technologies, tools, implementation techniques and trends, past, present and future.

    "Α picture is worth a thousand words." This widely used adage sums up in a few words the notion that a successful visual representation of a concept should enable easy and rapid absorption of large amounts of information. Although, in general, the notion of capturing complex ideas using images is very appealing, would 1000 words be enough to describe the unknown in a research field such as the life sciences? Life sciences is one of the biggest generators of enormous datasets, mainly as a result of recent and rapid technological advances; their complexity can make these datasets incomprehensible without effective visualization methods. Here we discuss the past, present and future of genomic and systems biology visualization. We briefly comment on many visualization and analysis tools and the purposes that they serve. We focus on the latest libraries and programming languages that enable more effective, efficient and faster approaches for visualizing biological concepts, and also comment on the future human-computer interaction trends that would enable for enhancing visualization further

    BioIMAX: a Web2.0 approach to visual data mining in bioimage data

    Loyek C. BioIMAX: a Web2.0 approach to visual data mining in bioimage data. Bielefeld: Universität Bielefeld; 2012.

    TULIP 4

    Tulip is an information visualization framework dedicated to the analysis and visualization of relational data. Based on more than 15 years of research and development, Tulip is built on a suite of tools and techniques that can be used to address a large variety of domain-specific problems. With Tulip, we aim to provide Python and/or C++ developers with a complete library, supporting the design of interactive information visualization applications for relational data, that can be customized to address a wide range of visualization problems. In its current iteration, Tulip enables the development of algorithms, visual encodings, interaction techniques, data models, and domain-specific visualizations. This development pipeline makes the framework efficient for creating research prototypes as well as for developing end-user applications. The recent addition of a complete Python programming layer makes Tulip an ideal tool for fast prototyping and treatment automation, allowing developers to focus on problem solving, and a great system for teaching purposes at all education levels.
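
    The Python layer mentioned above can be sketched roughly as follows. This is a minimal, hedged example assuming the tulip-python bindings (module `tulip`, entry point `tlp`) and an OGDF layout plugin are available; the property and algorithm names follow recent releases rather than being taken from the abstract.

```python
# Minimal sketch of driving Tulip from its Python layer.
# Assumes the tulip-python bindings expose tlp.newGraph(), addNode()/addEdge(),
# getLayoutProperty() and applyLayoutAlgorithm() as in recent releases;
# the "FM^3 (OGDF)" algorithm name assumes the OGDF layout plugins are present.
from tulip import tlp

graph = tlp.newGraph()
labels = graph.getStringProperty("viewLabel")

# Build a tiny relational dataset: three labelled nodes and two edges.
nodes = {name: graph.addNode() for name in ("geneA", "geneB", "geneC")}
for name, node in nodes.items():
    labels[node] = name
graph.addEdge(nodes["geneA"], nodes["geneB"])
graph.addEdge(nodes["geneB"], nodes["geneC"])

# Compute coordinates into the standard viewLayout property and save the
# graph so it can be explored interactively in the Tulip GUI.
layout = graph.getLayoutProperty("viewLayout")
graph.applyLayoutAlgorithm("FM^3 (OGDF)", layout)
tlp.saveGraph(graph, "genes.tlpx")
```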

    Integration and visualization of systems biology data in context of the genome

    Background: High-density tiling arrays and new sequencing technologies are generating rapidly increasing volumes of transcriptome and protein-DNA interaction data. Visualization and exploration of this data is critical to understanding the regulatory logic encoded in the genome by which the cell dynamically affects its physiology and interacts with its environment.
    Results: The Gaggle Genome Browser is a cross-platform desktop program for interactively visualizing high-throughput data in the context of the genome. Important features include dynamic panning and zooming, keyword search and open interoperability through the Gaggle framework. Users may bookmark locations on the genome with descriptive annotations and share these bookmarks with other users. The program handles large sets of user-generated data using an in-process database and leverages the facilities of SQL and the R environment for importing and manipulating data.
    A key aspect of the Gaggle Genome Browser is interoperability. By connecting to the Gaggle framework, the genome browser joins a suite of interconnected bioinformatics tools for analysis and visualization with connectivity to major public repositories of sequences, interactions and pathways. To this flexible environment for exploring and combining data, the Gaggle Genome Browser adds the ability to visualize diverse types of data in relation to its coordinates on the genome.
    Conclusions: Genomic coordinates function as a common key by which disparate biological data types can be related to one another. In the Gaggle Genome Browser, heterogeneous data are joined by their location on the genome to create information-rich visualizations yielding insight into genome organization, transcription and its regulation and, ultimately, a better understanding of the mechanisms that enable the cell to dynamically respond to its environment.
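
    The idea of joining heterogeneous data by genomic coordinates through an in-process database can be illustrated with the short sketch below. It uses Python's built-in sqlite3 purely as a stand-in; the table, column and gene names are invented for illustration and are not taken from the Gaggle Genome Browser.

```python
# Illustrative sketch (not Gaggle Genome Browser code): heterogeneous data
# joined by genomic coordinates in an in-process SQL database.
# Table, column and gene names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE genes(name TEXT, chrom TEXT, start_pos INTEGER, end_pos INTEGER);
    CREATE TABLE tiling_signal(chrom TEXT, pos INTEGER, value REAL);
""")
con.executemany("INSERT INTO genes VALUES (?, ?, ?, ?)",
                [("geneA", "chr1", 1000, 1600), ("geneB", "chr1", 4200, 5100)])
con.executemany("INSERT INTO tiling_signal VALUES (?, ?, ?)",
                [("chr1", 1100, 2.4), ("chr1", 1500, 3.1), ("chr1", 4600, 0.7)])

# Genomic location is the common key: average tiling-array signal per gene.
rows = con.execute("""
    SELECT g.name, AVG(t.value)
    FROM genes AS g
    JOIN tiling_signal AS t
      ON t.chrom = g.chrom AND t.pos BETWEEN g.start_pos AND g.end_pos
    GROUP BY g.name
""").fetchall()
print(rows)  # [('geneA', 2.75), ('geneB', 0.7)]
```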

    Primer for Image Informatics in Personalized Medicine

    Image informatics encompasses the concept of extracting and quantifying information contained in image data. Scenes, what an image contains, come from many imaging devices such as consumer electronics, medical imaging systems, 3D laser scanners, microscopes, or satellites. There is a marked increase in image informatics applications as there have been simultaneous advances in imaging platforms, data availability due to social media, and big data analytics. An area ready to take advantage of these developments is personalized medicine, the concept whose goal is to tailor healthcare to the individual. Patient health data are computationally profiled against a large pool of feature-rich data from other patients, ideally to optimize how a physician chooses care. One of the daunting challenges is how to effectively utilize medical image data in personalized medicine. Reliable data analytics products require as much automation as possible, which is difficult for data such as histopathology and radiology images because highly trained expert physicians are required to interpret the information. This review targets biomedical scientists interested in getting started on tackling image analytics. We present high-level discussions of sample preparation and image acquisition; data formats; storage and databases; image processing; computer vision and machine learning; and visualization and interactive programming. Examples will be covered using existing open-source software tools such as ImageJ, CellProfiler, and IPython Notebook. We discuss how difficult real-world challenges faced by image informatics and personalized medicine are being tackled with open-source biomedical data and software.
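
    As a rough sketch of the extract-and-quantify workflow described above, the hypothetical example below thresholds an image, labels connected objects and tabulates per-object features. It uses numpy and scikit-image as stand-ins, which are assumptions of this sketch rather than tools prescribed by the review.

```python
# Hypothetical sketch of the extract-and-quantify idea behind image informatics:
# segment objects in an image and compute per-object features.
# numpy and scikit-image are assumptions of this sketch, not the review's tools.
import numpy as np
from skimage import filters, measure

# Synthetic "microscopy" image: two bright blobs on a dark background.
image = np.zeros((64, 64))
image[10:20, 10:20] = 1.0
image[40:55, 30:50] = 0.8

# Global threshold (Otsu), connected-component labelling, feature extraction.
threshold = filters.threshold_otsu(image)
labels = measure.label(image > threshold)
for region in measure.regionprops(labels):
    print(region.label, region.area, region.centroid)
```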

    The Python interpreter as a framework for integrating scientific computing software-components

    The focus of the Molecular Simulation Laboratory is to model molecular interactions. In particular, we are working on automated docking and molecular visualization. Building and simulating complex molecular systems requires the tight interoperation of a variety of software tools originating from various scientific disciplines and usually developed independently of each other. Over the last ten years we have evolved a strategy for addressing the formidable software engineering problem of integrating such heterogeneous software tools. The basic idea is that the Python interpreter serves as the integration framework and provides a powerful and flexible glue for rapidly prototyping applications from reusable software components (i.e. Python packages). We no longer think in terms of programs, but rather in terms of packages which can be loaded dynamically into the interpreter when needed, instantly extending our framework (i.e. the Python interpreter) with new functionality. We have written more than 30 packages (>2,500 classes) providing support for applications ranging from scientific visualization and visual programming to molecular simulations and virtual reality. Moreover, some of our components have been reused successfully by other laboratories for their own research. Applications created from our software components have been distributed to over 15,000 users around the world. In this paper we describe our approach and its various applications, discuss the reasons that make this approach so successful, and present lessons learned and pitfalls to avoid in order to maximize the reusability and interoperability of software components.
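
    The central idea, assembling applications by loading reusable packages into the running interpreter on demand, can be sketched with the standard importlib machinery; the component names below are hypothetical and do not refer to the laboratory's actual packages.

```python
# Sketch of the "interpreter as integration framework" idea: components are
# ordinary Python packages, discovered and loaded into the running session
# only when an application needs them. Package names below are hypothetical.
import importlib

def load_component(name):
    """Dynamically import a software component (a Python package) by name."""
    try:
        return importlib.import_module(name)
    except ImportError:
        print(f"component {name!r} not installed; feature disabled")
        return None

# An application is assembled from whichever reusable components are present.
viewer = load_component("molviewer")       # hypothetical visualization package
docking = load_component("autodock_glue")  # hypothetical simulation package

if viewer and docking:
    # ... glue code combining the two components would go here ...
    pass
```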

    Modeling Faceted Browsing with Category Theory for Reuse and Interoperability

    Faceted browsing (also called faceted search or faceted navigation) is an exploratory search model where facets assist in the interactive navigation of search results. Facets are attributes that have been assigned to describe resources being explored; a faceted taxonomy is a collection of facets provided by the interface and is often organized as sets, hierarchies, or graphs. Faceted browsing has become ubiquitous in modern digital libraries and online search engines, yet the process is still difficult to abstractly model in a manner that supports the development of interoperable and reusable interfaces. We propose category theory as a theoretical foundation for faceted browsing and demonstrate how the interactive process can be mathematically abstracted in order to support the development of reusable and interoperable faceted systems. Existing efforts in facet modeling are based upon set theory, formal concept analysis, and light-weight ontologies, but in many regards they are implementations of faceted browsing rather than a specification of the basic, underlying structures and interactions. We will demonstrate that category theory allows us to specify faceted objects and study the relationships and interactions within a faceted browsing system. Resulting implementations can then be constructed through a category-theoretic lens using these models, allowing abstract comparison and communication that naturally support interoperability and reuse. In this context, reuse and interoperability are at two levels: between discrete systems and within a single system. Our model works at both levels by leveraging category theory as a common language for representation and computation. We will establish facets and faceted taxonomies as categories and will demonstrate how the computational elements of category theory, including products, merges, pushouts, and pullbacks, extend the usefulness of our model. More specifically, we demonstrate that categorical constructions such as the pullback and pushout operations can help organize and reorganize facets; these operations in particular can produce faceted views containing relationships not found in the original source taxonomy. We show how our category-theoretic model of facets relates to database schemas and discuss how this relationship assists in implementing the abstractions presented. We give examples of interactive interfaces from the biomedical domain to help illustrate how our abstractions relate to real-world requirements while enabling systematic reuse and interoperability. We introduce DELVE (Document ExpLoration and Visualization Engine), our framework for developing interactive visualizations as modular Web-applications in order to assist researchers with exploratory literature search. We show how facets relate to and control visualizations; we give three examples of text visualizations that either contain or interact with facets. We show how each of these visualizations can be represented with our model and demonstrate how our model directly informs implementation. With our general framework for communicating consistently about facets at a high level of abstraction, we enable the construction of interoperable interfaces and enable the intelligent reuse of both existing and future efforts.
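
    As a toy illustration of one of the categorical constructions mentioned above, the sketch below computes a pullback (fibred product) of two facet assignments over a shared set of documents, pairing facet values that co-occur on the same resource. The facets and data are invented, and the code is not taken from DELVE.

```python
# Toy illustration (invented data) of a pullback in Set:
# given f: A -> Docs and g: B -> Docs, the pullback is
#   A x_Docs B = {(a, b) | f(a) == g(b)}.
# Here A and B are two facet assignments (organism tags and year tags); the
# pullback pairs facet values that co-occur on the same document, yielding a
# combined faceted view not present in either source taxonomy.

def pullback(A, f, B, g):
    """Fibred product of f: A -> C and g: B -> C, as pairs agreeing in C."""
    return {(a, b) for a in A for b in B if f(a) == g(b)}

organism_tags = {("doc1", "human"), ("doc2", "mouse"), ("doc3", "human")}
year_tags = {("doc1", "2019"), ("doc2", "2021")}

# f and g each map a facet assignment to the document it describes.
combined = pullback(organism_tags, lambda a: a[0],
                    year_tags, lambda b: b[0])
for (doc, organism), (_, year) in sorted(combined):
    print(doc, organism, year)  # doc1 human 2019 / doc2 mouse 2021
```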

    Integration and visualisation of clinical-omics datasets for medical knowledge discovery

    In recent decades, the rise of various omics fields has flooded the life sciences with unprecedented amounts of high-throughput data, which have transformed the way biomedical research is conducted. This trend will only intensify in the coming decades, as the cost of data acquisition will continue to decrease. Therefore, there is a pressing need to find novel ways to turn this ocean of raw data into waves of information and finally distil those into drops of translational medical knowledge. This is particularly challenging because of the incredible richness of these datasets, the humbling complexity of biological systems and the growing abundance of clinical metadata, which makes the integration of disparate data sources even more difficult. Data integration has proven to be a promising avenue for knowledge discovery in biomedical research. Multi-omics studies allow us to examine a biological problem through different lenses using more than one analytical platform. These studies not only present tremendous opportunities for the deep and systematic understanding of health and disease, but they also pose new statistical and computational challenges. The work presented in this thesis aims to alleviate this problem with a novel pipeline for omics data integration. Modern omics datasets are extremely feature rich, and in multi-omics studies this complexity is compounded by a second or even third dataset. However, many of these features might be completely irrelevant to the studied biological problem or redundant in the context of others. Therefore, in this thesis, clinical-metadata-driven feature selection is proposed as a viable option for narrowing down the focus of analyses in biomedical research. Our visual cortex has been fine-tuned through millions of years to become an outstanding pattern recognition machine. To leverage this incredible resource of the human brain, we need to develop advanced visualisation software that enables researchers to explore these vast biological datasets through illuminating charts and interactivity. Accordingly, a substantial portion of this PhD was dedicated to implementing truly novel visualisation methods for multi-omics studies.
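
    The clinical-metadata-driven feature selection proposed above can be sketched, under assumptions, as ranking omics features by their association with a clinical variable. The example below uses scikit-learn's SelectKBest on synthetic data and is illustrative rather than the thesis's actual pipeline.

```python
# Hypothetical sketch (not the thesis pipeline): clinical-metadata-driven
# feature selection, keeping only the omics features most associated with a
# clinical variable before downstream multi-omics integration.
# scikit-learn and the synthetic data are assumptions of this sketch.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
n_samples, n_features = 60, 500          # e.g. 500 transcripts for 60 patients
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)   # clinical label, e.g. responder / non-responder

# Make a handful of features genuinely informative about the clinical label.
X[:, :5] += y[:, None] * 2.0

# Keep the 20 features most associated with the clinical variable.
selector = SelectKBest(score_func=f_classif, k=20).fit(X, y)
selected = selector.get_support(indices=True)
print("retained features:", selected[:10], "...")
```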

    Visual analysis of anatomy ontologies and related genomic information

    Challenges in scientific research include the difficulty of obtaining overviews of the large amount of data required for analysis, and of resolving the differences in terminology used to store and interpret information in multiple, independently created data sets. Ontologies provide one solution for analysis involving multiple data sources, improving cross-referencing and data integration. This thesis looks at harnessing advanced human perception to reduce the cognitive load in the analysis of the multiple, complex data sets that the bioinformatics user group studied uses in research, taking advantage also of users’ domain knowledge to build mental models of data that map to its underlying structure. Guided by a user-centred approach, prototypes were developed to provide a visual method for exploring users’ information requirements and to identify solutions for these requirements. 2D and 3D node-link graphs were built to visualise the hierarchically structured ontology data, to improve analysis of individual data sets and comparison of multiple data sets, by providing overviews of the data followed by techniques for detailed analysis of regions of interest. Iterative, heuristic and structured user evaluations were used to assess and refine the options developed for the presentation and analysis of the ontology data. The evaluation results confirmed the advantages that visualisation provides over text-based analysis, and also highlighted the respective advantages of 2D and 3D for visual data analysis.
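
    A minimal sketch of the 2D node-link view of a hierarchically structured ontology is given below. It uses networkx and matplotlib as stand-ins rather than the prototypes developed in the thesis, and the anatomy terms are illustrative.

```python
# Minimal sketch (not the thesis prototypes): a hierarchically structured
# anatomy ontology drawn as a 2D node-link graph. networkx/matplotlib and
# the example terms are assumptions of this sketch.
import networkx as nx
import matplotlib.pyplot as plt

ontology = nx.DiGraph()
ontology.add_edges_from([
    ("anatomical structure", "organ"),
    ("organ", "heart"),
    ("organ", "brain"),
    ("heart", "cardiac ventricle"),
    ("brain", "forebrain"),
])

# A spring layout gives a quick overview; regions of interest can then be
# inspected in more detail.
pos = nx.spring_layout(ontology, seed=42)
nx.draw_networkx(ontology, pos, node_color="lightblue", font_size=8)
plt.axis("off")
plt.show()
```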