
    A Midas plugin to enable construction of reproducible web-based image processing pipelines

    Image processing is an important quantitative technique for neuroscience researchers, but difficult for those who lack experience in the field. In this paper we present a web-based platform that allows an expert to create a brain image processing pipeline, enabling execution of that pipeline even by biomedical researchers with limited image processing knowledge. These tools are implemented as a plugin for Midas, an open-source toolkit for creating web-based scientific data storage and processing platforms. Using this plugin, an image processing expert can construct a pipeline, create a web-based user interface, manage jobs, and visualize intermediate results. Pipelines are executed on a grid computing platform using BatchMake and HTCondor. This represents a new capability for biomedical researchers and offers an innovative platform for scientific collaboration. Current tools work well, but can be inaccessible to those lacking image processing expertise. Using this plugin, researchers in collaboration with image processing experts can create workflows with reasonable default settings and streamlined user interfaces, and data can be processed easily from a lab environment without the need for a powerful desktop computer. This platform allows simplified troubleshooting, centralized maintenance, and easy data sharing with collaborators. These capabilities enable reproducible science through the sharing of datasets and processing pipelines between collaborators. In this paper, we present a description of this Midas plugin, along with results obtained from building and executing several ITK-based image processing workflows for diffusion-weighted MRI (DW-MRI) of rodent brains, as well as recommendations for building automated image processing pipelines. Although the particular image processing pipelines developed were focused on rodent brain MRI, the presented plugin can be used to support any executable or script-based pipeline.
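
    The execution layer described here rests on ordinary HTCondor job submission. As a rough, hypothetical sketch of that mechanism (the executable name and data paths are invented for illustration; the plugin itself generates its job descriptions through BatchMake rather than code like this), one pipeline step could be queued over a set of input images as follows:

        # Illustrative sketch only: queue one pipeline step on an HTCondor pool
        # by generating a submit description and handing it to the standard
        # condor_submit CLI. Executable name and paths are hypothetical.
        import subprocess
        import tempfile

        SUBMIT_DESCRIPTION = """\
        universe = vanilla
        # hypothetical ITK-based preprocessing step
        executable = /opt/pipeline/dwi_preprocess
        arguments = --input $(input_image) --output corrected_$(Process).nii
        should_transfer_files = YES
        transfer_input_files = $(input_image)
        output = step.$(Cluster).$(Process).out
        error = step.$(Cluster).$(Process).err
        log = step.$(Cluster).log
        queue input_image matching files data/*.nii
        """

        def submit_step() -> None:
            """Write the submit description to disk and submit it to the pool."""
            with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
                f.write(SUBMIT_DESCRIPTION)
                submit_path = f.name
            # condor_submit queues one job per file matching data/*.nii and
            # prints the assigned cluster id on success.
            subprocess.run(["condor_submit", submit_path], check=True)

        if __name__ == "__main__":
            submit_step()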

    A Modular and Open-Source Framework for Virtual Reality Visualisation and Interaction in Bioimaging

    Life science today involves computational analysis of a large amount and variety of data, such as volumetric data acquired by state-of-the-art microscopes, or mesh data resulting from the analysis of such data or from simulations. The advent of new imaging technologies, such as lightsheet microscopy, confronts users with an ever-growing amount of data, with even terabytes of imaging data created within a day. With the possibility of gentler and higher-performance imaging, the spatiotemporal complexity of the model systems or processes of interest is increasing as well. Visualisation is often the first step in making sense of this data, and a crucial part of building and debugging analysis pipelines. It is therefore important that visualisations can be quickly prototyped, as well as developed into or embedded in full applications. In order to better judge spatiotemporal relationships, immersive hardware, such as Virtual or Augmented Reality (VR/AR) headsets and associated controllers, is becoming an invaluable tool. In this work we present scenery, a modular and extensible visualisation framework for the Java VM that can handle mesh data and large volumetric data with multiple views, timepoints, and color channels. scenery is free and open-source software, works on all major platforms, and uses the Vulkan or OpenGL rendering APIs. We introduce scenery's main features, and discuss its use with VR/AR hardware and in distributed rendering. In addition to the visualisation framework, we present a series of case studies where scenery provides tangible benefit in developmental and systems biology: with Bionic Tracking, we demonstrate a new technique for tracking cells in 4D volumetric datasets via eye gaze in a virtual reality headset, with the potential to speed up manual tracking tasks by an order of magnitude. We further introduce ideas for moving towards virtual reality-based laser ablation, and perform a user study to gain insight into performance, acceptance, and issues when performing ablation tasks with virtual reality hardware in fast-developing specimens. To tame the amount of data originating from state-of-the-art volumetric microscopes, we present ideas for rendering the highly efficient Adaptive Particle Representation. Finally, we present sciview, an ImageJ2/Fiji plugin making the features of scenery available to a wider audience.
    Thesis contents: Abstract; Foreword and Acknowledgements; Overview and Contributions. Part I, Introduction: 1 Fluorescence Microscopy; 2 Introduction to Visual Processing; 3 A Short Introduction to Cross Reality; 4 Eye Tracking and Gaze-based Interaction. Part II, VR and AR for Systems Biology: 5 scenery: VR/AR for Systems Biology; 6 Rendering; 7 Input Handling and Integration of External Hardware; 8 Distributed Rendering; 9 Miscellaneous Subsystems; 10 Future Development Directions. Part III, Case Studies: 11 Bionic Tracking: Using Eye Tracking for Cell Tracking; 12 Towards Interactive Virtual Reality Laser Ablation; 13 Rendering the Adaptive Particle Representation; 14 sciview: Integrating scenery into ImageJ2 & Fiji. Part IV, Conclusion: 15 Conclusions and Outlook. Backmatter & Appendices: A Questionnaire for VR Ablation User Study; B Full Correlations in VR Ablation Questionnaire; C Questionnaire for Bionic Tracking User Study; List of Tables; List of Figures; Bibliography; SelbstständigkeitserklÀrung (Declaration of Authorship).

    Architectures for ubiquitous 3D on heterogeneous computing platforms

    Today, a wide scope for 3D graphics applications exists, including domains such as scientific visualization, 3D-enabled web pages, and entertainment. At the same time, the devices and platforms that run and display the applications are more heterogeneous than ever. Display environments range from mobile devices to desktop systems and ultimately to distributed displays that facilitate collaborative interaction. While the capability of the client devices may vary considerably, the visualization experiences running on them should be consistent. The field of application should dictate how and on what devices users access the application, not the technical requirements to realize the 3D output. The goal of this thesis is to examine the diverse challenges involved in providing consistent and scalable visualization experiences to heterogeneous computing platforms and display setups. While we could not address the myriad of possible use cases, we developed a comprehensive set of rendering architectures in the major domains of scientific and medical visualization, web-based 3D applications, and movie virtual production. To provide the required service quality, performance, and scalability for different client devices and displays, our architectures focus on the efficient utilization and combination of the available client, server, and network resources. We present innovative solutions that incorporate methods for hybrid and distributed rendering as well as means to manage data sets and stream rendering results. We establish the browser as a promising platform for accessible and portable visualization services. We collaborated with experts from the medical field and the movie industry to evaluate the usability of our technology in real-world scenarios. The presented architectures achieve a wide coverage of display and rendering setups and at the same time share major components and concepts. Thus, they build a strong foundation for a unified system that supports a variety of use cases.
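
    The central idea, matching the rendering strategy to whatever client, server, and network resources a session has available, can be made concrete with a small sketch. The following is an illustrative policy under assumed metrics (gpu_score, bandwidth_mbps, and the thresholds are hypothetical inputs of my own choosing), not the thesis's actual architecture:

        # Minimal sketch (not the thesis's implementation): a naive policy for
        # picking a rendering path per client from self-reported capability.
        from dataclasses import dataclass

        @dataclass
        class ClientProfile:
            gpu_score: float       # synthetic benchmark score from the client
            bandwidth_mbps: float  # measured downlink to the rendering server
            scene_triangles: int   # complexity of the requested data set

        def choose_rendering_path(c: ClientProfile) -> str:
            """Return 'client', 'hybrid', or 'server' for this session.

            Thresholds are illustrative placeholders; a real system would also
            weigh latency targets, display resolution, and server load.
            """
            if c.gpu_score >= 1.0 and c.scene_triangles < 5_000_000:
                return "client"   # device can render the scene locally
            if c.bandwidth_mbps >= 25.0:
                return "server"   # stream frames rendered remotely
            return "hybrid"       # split work, e.g. proxy geometry locally

        print(choose_rendering_path(ClientProfile(0.3, 40.0, 20_000_000)))  # server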

    The impact of arterial input function determination variations on prostate dynamic contrast-enhanced magnetic resonance imaging pharmacokinetic modeling: a multicenter data analysis challenge, part II

    This multicenter study evaluated the effect of variations in arterial input function (AIF) determination on pharmacokinetic (PK) analysis of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data using the shutter-speed model (SSM). Data acquired from eleven prostate cancer patients were shared among nine centers. Each center used a site-specific method to measure the individual AIF from each data set and submitted the results to the managing center. These AIFs, their reference tissue-adjusted variants, and a literature population-averaged AIF were used by the managing center to perform SSM PK analysis to estimate K^trans (volume transfer rate constant), v_e (extravascular, extracellular volume fraction), k_ep (efflux rate constant), and τ_i (mean intracellular water lifetime). All other variables, including the definition of the tumor region of interest and precontrast T1 values, were kept the same, so that the parameter variations caused by the AIF alone could be evaluated. Considerable PK parameter variations were observed, with within-subject coefficient of variation (wCV) values of 0.58, 0.27, 0.42, and 0.24 for K^trans, v_e, k_ep, and τ_i, respectively, using the unadjusted AIFs. Use of the reference tissue-adjusted AIFs reduced variations in K^trans and v_e (wCV = 0.50 and 0.10, respectively), but had smaller effects on k_ep and τ_i (wCV = 0.39 and 0.22, respectively). k_ep is less sensitive to AIF variation than K^trans, suggesting it may be a more robust imaging biomarker of prostate microvasculature. With low sensitivity to AIF uncertainty, the SSM-unique τ_i parameter may have advantages over the conventional PK parameters in longitudinal studies.
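
    For orientation, the three conventional tracer-kinetic parameters are linked by a standard identity (which the shutter-speed model retains while adding τ_i for transcytolemmal water exchange), and the reported wCV values follow the usual pooled within-subject estimator. A minimal statement of both is given below; the wCV form is one common convention and not necessarily the study's exact computation:

        % Standard relation among the tracer-kinetic parameters:
        k_{ep} = \frac{K^{trans}}{v_e}

        % One common pooled estimator of the within-subject coefficient of
        % variation over n subjects, where s_i and \bar{x}_i are the SD and
        % mean of subject i's estimates across the nine AIF determinations:
        \mathrm{wCV} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\frac{s_i^{2}}{\bar{x}_i^{2}}}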

    LIPIcs, Volume 277, GIScience 2023, Complete Volume

    LIPIcs, Volume 277, GIScience 2023, Complete Volume.

    Structural and Functional Studies of Polyketide Synthases

    Polyketides, natural products produced by multi-domain polyketide synthases (PKSs), have proven to be excellent starting points for drug discovery. Rational engineering of PKSs holds much promise for the generation of novel polyketide pharmaceuticals; however, to enable this we need to gain a better understanding of how mature polyketides are generated and how individual modules within a polyketide synthase assemble and interact. Here, work was performed to investigate three polypeptides from the biosynthesis of the natural products indanomycin and rhizoxin: the candidate polyketide cyclase IdmH; the fourth subunit of the indanomycin megasynthase, IdmO; and the branching module of the rhizoxin PKS. Indanomycin must undergo several transformations by post-PKS tailoring enzymes. One such enzyme, IdmH, has been hypothesised to act as a cyclase and catalyse the formation of the indane ring via a Diels-Alder reaction. The crystal structure of wild-type IdmH was determined to 2.7 Å resolution, and the interactions between IdmH and its proposed product indanomycin were characterised using NMR spectroscopy and in silico methods. The fully reducing IdmO module was successfully expressed and purified. Characterisation by negative-stain electron microscopy yielded a low-resolution model of IdmO, while attempts at cryo-electron microscopy (cryo-EM) analysis revealed a number of difficulties associated with the denaturation of this large complex during cryo-EM grid preparation. A similar cryo-EM approach was used to study the branching module of the rhizoxin PKS: a 3.7 Å resolution map was determined for this module, which contains the ketosynthase, branching, and acyl carrier protein (ACP) domains. Two ACP binding sites were identified, which helps explain the unorthodox activity of this module. This research has provided valuable insights into different aspects of PKS biology, ranging from polyketide tailoring and branching to the assembly of intact modules, and forms a solid basis for future studies of these fascinating biosynthetic machines.

    12th International Conference on Geographic Information Science: GIScience 2023, September 12–15, 2023, Leeds, UK

    No abstract available

    Automated morphometric analysis and phenotyping of mouse brains from structural µMR images

    In light of the utility and increasing ubiquity of mouse models of genetic and neurological disease, I describe fully automated pipelines for the investigation of structural microscopic magnetic resonance (µMR) images of mouse brains, for both high-throughput phenotyping and the monitoring of disease. Mouse models offer unparalleled insight into genetic function and brain plasticity in phenotyping studies, and into neurodegenerative disease onset and progression in therapeutic trials. I developed two cohesive, automatic software tools for Voxel- and Tensor-Based Morphometry (V/TBM) and the Boundary Shift Integral (BSI) in the mouse brain. V/TBM are advantageous for their ability to highlight morphological differences between groups without laboriously delineating regions of interest. The BSI is a powerful and sensitive imaging biomarker for the detection of atrophy. The resulting pipelines are described in detail. I show the translation and application of open-source software developed for clinical MRI analysis to mouse brain data: for tissue segmentation into high-quality, subject-specific maps using contemporary multi-atlas techniques, and for symmetric, inverse-consistent registration. I describe atlases and parameters suitable for the preclinical paradigm, and illustrate and discuss image processing challenges encountered and overcome during development. As proof of principle and to illustrate robustness, I used both pipelines with in vivo and ex vivo mouse brain datasets to identify differences between groups, representing the morphological influence of genes, and subtle longitudinal changes over time, particularly in relation to Down syndrome and Alzheimer's disease. I also discuss the merits of transitioning preclinical analysis from predominantly ex vivo MRI to in vivo MRI, where morphometry remains viable and fewer mice are necessary. This thesis conveys the cross-disciplinary translation of up-to-date image analysis techniques to the preclinical paradigm; the development of novel methods and adaptations to robustly process large cohorts of data; and the sensitive detection of phenotypic differences and neurodegenerative changes in the mouse brain.
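
    The BSI underlying the atrophy pipeline has a compact closed form in its classic formulation (Freeborough and Fox, 1997): integrate clipped intensity differences over the brain boundary region and normalise by the intensity window. Below is a minimal sketch of that textbook formulation in Python, assuming pre-registered, intensity-normalised scans; it illustrates the measure itself, not this thesis's production pipeline, and the function name and helper choices are my own:

        # Hedged sketch of the classic Boundary Shift Integral, assuming the
        # two scans are already rigidly registered and intensity-normalised.
        import numpy as np
        from scipy import ndimage

        def bsi(base, repeat, mask_base, mask_rpt, i_min, i_max, voxel_ml):
            """Return estimated tissue loss in the units of voxel_ml.

            base, repeat : 3-D intensity volumes at the two timepoints
            mask_*       : boolean brain masks for each timepoint
            i_min, i_max : intensity window bracketing the brain/CSF boundary
            voxel_ml     : volume of one voxel (e.g. in mL)
            """
            union = mask_base | mask_rpt
            inter = mask_base & mask_rpt
            # Boundary region: dilated union minus eroded intersection.
            region = ndimage.binary_dilation(union) & ~ndimage.binary_erosion(inter)
            cb = np.clip(base, i_min, i_max)
            cr = np.clip(repeat, i_min, i_max)
            # Positive values mean the boundary moved inwards: tissue loss.
            return (cb - cr)[region].sum() * voxel_ml / (i_max - i_min)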