
    A Hitchhiker's guide through the bio-image analysis software universe

    Modern research in the life sciences is unthinkable without computational methods for extracting, quantifying and visualising information derived from microscopy imaging data of biological samples. In the past decade, we observed a dramatic increase in available software packages for these purposes. As it is increasingly difficult to keep track of the available image analysis platforms, tool collections, components and emerging technologies, we provide a conservative overview of software that we use in daily routine and give insights into emerging new tools. We give guidance on which aspects to consider when choosing the platform that best suits the user's needs, including the image data type, the skills of the team, the infrastructure and community at the institute, and the availability of time and budget. Peer reviewed.

    Towards Smarter Fluorescence Microscopy: Enabling Adaptive Acquisition Strategies With Optimized Photon Budget

    Fluorescence microscopy is an invaluable technique for studying the intricate process of organism development. The acquisition process, however, is associated with a fundamental trade-off between the quality and reliability of the acquired data. On one hand, the goal of capturing development in its entirety, often across multiple spatial and temporal scales, requires extended acquisition periods. On the other hand, the high doses of light required for such experiments are harmful to living samples and can introduce non-physiological artifacts into the normal course of development. Conventionally, a single set of acquisition parameters is chosen at the beginning of the acquisition and constitutes the experimenter's best guess of the overall optimal configuration within the aforementioned trade-off. In the paradigm of adaptive microscopy, by contrast, one aims to achieve a more efficient photon budget distribution by dynamically adjusting the acquisition parameters to the changing properties of the sample. In this thesis, I explore the principles of adaptive microscopy and propose a range of improvements for two real imaging scenarios. Chapter 2 summarizes the design and implementation of an adaptive pipeline for efficient observation of asymmetrically dividing neurogenic progenitors in the zebrafish retina. In the described approach, the fast and expensive acquisition mode is automatically activated only when mitotic cells are present in the field of view. The method illustrates the benefits of adaptive acquisition in the common scenario of individual events of interest being sparsely distributed throughout the duration of the acquisition. Chapter 3 focuses on computational aspects of segmentation-based adaptive schemes for efficient acquisition of the developing Drosophila pupal wing. Fast sample segmentation is shown to provide a valuable output for the accurate evaluation of sample morphology and dynamics in real time. This knowledge proves instrumental for adjusting the acquisition parameters to the current properties of the sample and reducing the required photon budget with minimal effect on the quality of the acquired data. Chapter 4 addresses the generation of synthetic training data for learning-based methods in bioimage analysis, making them more practical and accessible for smart microscopy pipelines. State-of-the-art deep learning models trained exclusively on the generated synthetic data are shown to yield powerful predictions when applied to real microscopy images. Finally, an in-depth evaluation of the segmentation quality of both real and synthetic data-based models illustrates important practical aspects of the approach and outlines directions for further research.
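The event-triggered strategy described in Chapter 2 can be illustrated with a minimal sketch: a cheap detector inspects each overview frame, and the expensive acquisition mode is enabled only while events are in the field of view. The threshold-based `detect_event` below is a hypothetical stand-in for the actual mitotic-cell detector, which the abstract does not specify; the threshold values are assumptions.

```python
import numpy as np

def detect_event(frame, intensity_thresh=200, min_pixels=10):
    """Flag a frame as containing an event of interest when enough pixels
    exceed a fixed intensity threshold (a stand-in for a real detector)."""
    return int((frame > intensity_thresh).sum()) >= min_pixels

def choose_mode(frame):
    """Pick the acquisition mode for the next time point: switch to the
    fast, expensive mode only while events are present."""
    return "fast" if detect_event(frame) else "slow"
```

In a real pipeline the detector would run on a low-dose overview image, so the decision itself costs almost no photon budget.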

    Volumetric Segmentation of Cell Cycle Markers in Confocal Images Using Machine Learning and Deep Learning

    © 2020 Khan, Voß, Pound and French. Understanding plant growth processes is important for many aspects of biology and food security. Automating the observation of plant development—a process referred to as plant phenotyping—is increasingly important in the plant sciences, and is often a bottleneck. Automated tools are required to analyze the data in microscopy images depicting plant growth, either locating or counting regions of cellular features in images. In this paper, we present to the plant community an introduction to and exploration of two machine learning approaches to the problem of marker localization in confocal microscopy. First, a comparative study is conducted on the classification accuracy of common conventional machine learning algorithms, as a means to highlight the challenges of these methods. Second, a 3D (volumetric) deep learning approach is developed and presented, including consideration of appropriate loss functions and training data. A qualitative and quantitative analysis of all the results produced is performed. All approaches are evaluated on an unseen time-series sequence comprising several individual 3D volumes capturing plant growth. The comparative analysis shows that the deep learning approach produces more accurate and robust results than traditional machine learning. To accompany the paper, we are releasing the 4D point annotation tool used to generate the annotations, in the form of a plugin for the popular ImageJ (FIJI) software. Network models and example datasets will also be available online.
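Loss choice matters for volumetric segmentation because the target markers are sparse relative to the background. A common choice in that setting, shown here purely as an illustration (the paper's actual loss functions are not given in the abstract), is the soft Dice loss, sketched below with numpy:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-7):
    """Soft Dice loss for a predicted probability volume vs. a binary target.
    Dice = 2|P∩T| / (|P| + |T|); the loss is 1 - Dice, so a perfect overlap
    gives 0 and a disjoint prediction gives ~1. `eps` avoids division by zero."""
    intersection = (pred * target).sum()
    denom = pred.sum() + target.sum()
    return 1.0 - (2.0 * intersection + eps) / (denom + eps)
```

Unlike per-voxel cross-entropy, the Dice loss is insensitive to the large fraction of background voxels, which is why it is popular for sparse 3D annotations.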

    Tools for interfacing, extracting, and analyzing neural signals using wide-field fluorescence imaging and optogenetics in awake behaving mice

    Imaging of multiple cells simultaneously has rapidly multiplied both the rate of data acquisition and our knowledge of the complex dynamics within the mammalian brain. Data acquisition has been dramatically enhanced by highly affordable, sensitive image sensors that enable high-throughput detection of neural activity in intact animals. Genetically encoded calcium sensors deliver a substantial boost in signal strength and, in combination with equally critical advances in the size, speed, and sensitivity of the image sensors available in scientific cameras, enable high-throughput detection of neural activity in behaving animals using traditional wide-field fluorescence microscopy. However, the tremendous increase in data flow presents challenges for the processing, analysis, and storage of captured video, prompting a reexamination of the routines traditionally used to process data in neuroscience and demanding improvements in both the hardware and software used to process, analyze, and store it. This project demonstrates the ease with which a dependable and affordable wide-field fluorescence imaging system can be assembled and integrated with the behavior control and monitoring systems found in a typical neuroscience laboratory. An open-source MATLAB toolbox is employed to efficiently analyze and visualize large imaging data sets in a manner that is both interactive and fully automated. This software package provides a library of image pre-processing routines optimized for batch processing of continuous functional fluorescence video, and additionally automates a fast, unsupervised ROI detection and signal extraction routine. Further, an extension of this toolbox is described that uses GPU programming to process streaming video, enabling on-line identification, segmentation, and extraction of neural activity signals, with algorithms that improve signal specificity and image quality at the single-cell level in behaving animals. This project describes the strategic ingredients for transforming a large bulk flow of raw continuous video into proportionally informative images and knowledge.
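As an illustration of the kind of signal extraction such a pipeline performs after ROI detection, a normalized fluorescence trace (ΔF/F0) can be computed from a raw per-ROI time series in a few lines. This is a generic numpy sketch, not the MATLAB toolbox's actual routine, and the percentile-based baseline is an assumption:

```python
import numpy as np

def delta_f_over_f(trace, baseline_pct=10):
    """Normalize a raw fluorescence time series to ΔF/F0, taking a low
    percentile of the trace as the baseline F0 (robust to transients)."""
    f0 = np.percentile(trace, baseline_pct)
    return (trace - f0) / f0
```

Using a low percentile rather than the mean keeps the baseline estimate from being inflated by activity transients, a common convention in calcium imaging analysis.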