
    Active Learning Pipeline for Brain Mapping in a High Performance Computing Environment

    This paper describes a scalable active learning pipeline prototype for large-scale brain mapping that leverages high-performance computing. It enables high-throughput evaluation of algorithm results, which, after human review, are used for iterative machine learning model training. Image processing and machine learning are performed in a batch layer. Benchmark testing of image processing using pMATLAB shows that a 100× (10,000%) increase in throughput can be achieved while total processing time increases by only 9% on Xeon-G6 CPUs and by 22% on Xeon-E5 CPUs, indicating robust scalability. The images and algorithm results are provided through a serving layer to a browser-based user interface for interactive review. This pipeline has the potential to greatly reduce the manual annotation burden and improve the overall performance of machine learning-based brain mapping. Comment: 6 pages, 5 figures, submitted to IEEE HPEC 2020 proceedings
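
The iterative query–review–retrain loop described in this abstract can be sketched in a deliberately minimal, hypothetical form; the model, oracle, and uncertainty score below are toy stand-ins for illustration, not the paper's actual components:

```python
def uncertainty(model, item):
    """Hypothetical uncertainty score: distance of the model's
    soft score from the 0.5 decision boundary (lower = less sure)."""
    return abs(model(item) - 0.5)

def active_learning_round(model, unlabeled, oracle, budget):
    """One iteration: rank the unlabeled pool by uncertainty, ask a
    human reviewer (the oracle) to label the most uncertain items,
    and return the new labels for the next round of training."""
    ranked = sorted(unlabeled, key=lambda x: uncertainty(model, x))
    queried = ranked[:budget]          # most uncertain first
    return [(x, oracle(x)) for x in queried]

# Toy stand-ins for a model and a human reviewer.
model = lambda x: x / 10.0            # "confidence" in [0, 1]
oracle = lambda x: int(x >= 5)        # ground-truth label
pool = list(range(10))
labels = active_learning_round(model, pool, oracle, budget=3)
```

In the paper's setting the oracle is the browser-based review interface and the scoring runs in the HPC batch layer; the loop structure is the same.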

    Methods for Automated Neuron Image Analysis

    Knowledge of neuronal cell morphology is essential for performing specialized analyses in the endeavor to understand neuron behavior and unravel the underlying principles of brain function. Neurons can be captured with a high level of detail using modern microscopes, but many neuroscientific studies require a more explicit and accessible representation than offered by the resulting images, underscoring the need for digital reconstruction of neuronal morphology from the images into a tree-like graph structure. This thesis proposes new computational methods for automated detection and reconstruction of neurons from fluorescence microscopy images. Specifically, the successive chapters describe and evaluate original solutions to problems such as the detection of landmarks (critical points) of the neuronal tree, complete tracing and reconstruction of the tree, and the detection of regions containing neurons in high-content screens.
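
The tree-like graph representation mentioned above is commonly stored in the SWC format, where each node records an id, a type code, coordinates, a radius, and its parent (with -1 marking the root). A minimal parsing sketch, with invented sample data, showing how branch (critical) points fall out of the parent structure:

```python
from collections import Counter

def parse_swc(text):
    """Parse SWC-format neuron reconstructions: one node per line as
    'id type x y z radius parent', with parent -1 marking the root."""
    nodes = {}
    for line in text.strip().splitlines():
        if line.startswith('#'):
            continue                     # skip comment lines
        i, t, x, y, z, r, p = line.split()
        nodes[int(i)] = dict(type=int(t),
                             xyz=(float(x), float(y), float(z)),
                             radius=float(r), parent=int(p))
    return nodes

def branch_points(nodes):
    """Critical points of the tree: nodes with more than one child."""
    kids = Counter(n['parent'] for n in nodes.values() if n['parent'] != -1)
    return sorted(i for i, c in kids.items() if c > 1)

swc = """
1 1 0 0 0 1.0 -1
2 3 1 0 0 0.5 1
3 3 2 1 0 0.4 2
4 3 2 -1 0 0.4 2
"""
nodes = parse_swc(swc)
# node 2 has two children (3 and 4), so it is a branch point
```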

    Doctor of Philosophy

    Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of these data make human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature-rich datasets are required. However, the extremely anisotropic resolution of the data makes segmentation and tracking across slices difficult. Furthermore, the thickness of the slices can make the membranes of the neurons hard to identify, and structures can change significantly from one section to the next due to slice thickness, making tracking difficult. This thesis presents a complete method for segmenting many neurons at once in two-dimensional (2D) electron microscopy images and reconstructing and visualizing them in three dimensions (3D). First, we present an advanced method for identifying neuron membranes in 2D, necessary for whole-neuron segmentation, using a machine learning approach. The method uses a series of artificial neural networks (ANNs) combined with a feature vector composed of image and context intensities sampled over a stencil neighborhood. Several ANNs are applied in series, allowing each ANN to use the classification context provided by the previous network to improve detection accuracy. To further improve membrane detection, we use information from a nonlinear alignment of sequential learned membrane images in a final ANN that improves membrane detection in each section. The final output, the detected membranes, is used to obtain 2D segmentations of all the neurons in an image.
We also present a method that constructs 3D neuron representations by formulating the problem of finding paths through sets of sections as an optimal path computation, which applies a cost function to the identification of a cell from one section to the next and solves this optimization problem using Dijkstra's algorithm. This formulation accounts for variability and inconsistencies between sections and prioritizes cells based on the evidence of their connectivity. Finally, we present a tool that combines these techniques with a visual user interface that enables users to quickly segment whole neurons in large volumes.
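
The optimal-path formulation described above can be illustrated with plain Dijkstra over a layered graph whose nodes are candidate segments in successive sections; the edge weights below are invented stand-ins for the inter-section matching cost, not the thesis's actual cost function:

```python
import heapq

def dijkstra(graph, src, dst):
    """Standard Dijkstra over a dict graph {node: [(neighbor, cost), ...]};
    returns the lowest-cost path from src to dst and its total cost."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float('inf')):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, node = [dst], dst
    while node != src:                    # walk predecessors back to src
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]

# Layered toy graph: 's1_a' and 's1_b' are competing candidate segments
# in section 1; weights stand in for a shape/overlap dissimilarity cost.
g = {
    's0':   [('s1_a', 0.2), ('s1_b', 0.9)],
    's1_a': [('s2', 0.3)],
    's1_b': [('s2', 0.1)],
}
path, cost = dijkstra(g, 's0', 's2')
```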

    Automated Strategies in Multimodal and Multidimensional Ultrasound Image-based Diagnosis

    Medical ultrasonography is an effective technique in traditional anatomical and functional diagnosis. However, it requires visual examination by experienced clinicians, which is a laborious, time-consuming and highly subjective procedure. Computer-aided diagnosis (CADx) systems have been extensively used in clinical practice to support the interpretation of images; nevertheless, current ultrasound CADx still entails substantial user-dependency and is unable to extract image data for prediction modelling. The aim of this thesis is to propose a set of fully automated strategies to overcome the limitations of ultrasound CADx. These strategies address multiple modalities (B-Mode, Contrast-Enhanced Ultrasound-CEUS, Power Doppler-PDUS and Acoustic Angiography-AA) and dimensions (2-D and 3-D imaging). The enabling techniques presented in this work are designed, developed and quantitatively validated to efficiently improve overall patient diagnosis. This work is subdivided into two macro-sections: in the first part, two fully automated algorithms for the reliable quantification of 2-D B-Mode ultrasound skeletal muscle architecture and morphology are proposed. In the second part, two fully automated algorithms for the objective assessment and characterization of tumor vasculature in 3-D CEUS and PDUS thyroid tumors and preclinical AA cancer growth are presented. In the first part, the MUSA (Muscle UltraSound Analysis) algorithm is designed to measure the muscle thickness, the fascicle length and the pennation angle; the TRAMA (TRAnsversal Muscle Analysis) algorithm is proposed to extract and analyze the Visible Cross-Sectional Area (VCSA). The MUSA and TRAMA algorithms have been validated on two datasets of 200 images; automatic measurements have been compared with expert operators' manual measurements.
A preliminary statistical analysis was performed to prove the ability of texture analysis on the automatic VCSA to distinguish between healthy and pathological muscles. In the second part, quantitative assessment of tumor vasculature is proposed in two automated algorithms for the objective characterization of 3-D CEUS/Power Doppler thyroid nodules and for studying the evolution of fibrosarcoma invasion in preclinical 3-D AA imaging. Vasculature analysis relies on the quantification of architecture and vessel tortuosity. Vascular features obtained from CEUS and PDUS images of 20 thyroid nodules (10 benign, 10 malignant) have been used in a multivariate statistical analysis supported by histopathological results. Vasculature parametric maps of implanted fibrosarcoma are extracted from 8 rats investigated with 3-D AA along four time points (TPs), in control and tumor areas; results have been compared with previous manual findings in a longitudinal tumor growth study. The MUSA and TRAMA algorithms achieve a 100% segmentation success rate. The absolute difference between manual and automatic measurements is below 2% for the muscle thickness and 4% for the VCSA (values between 5-10% are acceptable in clinical practice), suggesting that automatic and manual measurements can be used interchangeably. Texture feature extraction on the automatic VCSAs reveals that texture descriptors can distinguish healthy from pathological muscles with a 100% success rate for all four muscles. Vascular features extracted from 20 thyroid nodules in 3-D CEUS and PDUS volumes can be used to distinguish benign from malignant tumors with a 100% success rate for both ultrasound techniques. Malignant tumors present higher values of architecture and tortuosity descriptors; 3-D CEUS and PDUS imaging present the same accuracy in differentiating benign and malignant nodules.
Vascular parametric maps extracted from the 8 rats along the 4 TPs in 3-D AA imaging show that parameters extracted from the control area are statistically different from those within the tumor volume. Tumor angiogenic vessels present a smaller diameter and higher tortuosity. Tumor evolution is characterized by significant vascular tree growth and a constant vessel diameter along the four TPs, confirming the previous findings. In conclusion, the proposed automated strategies perform well in segmentation, feature extraction, muscle disease detection and tumor vascular characterization. These techniques can be extended to the investigation of other organs and diseases and embedded in ultrasound CADx, providing a user-independent, reliable diagnosis.
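
One standard vessel-tortuosity descriptor of the kind referred to above is the distance metric: the arc length of a centreline divided by the chord between its endpoints. A minimal sketch with invented toy centrelines (the thesis's exact feature set is not specified here):

```python
import math

def tortuosity(points):
    """Distance-metric tortuosity of a vessel centreline: total arc
    length divided by the straight-line (chord) distance between its
    endpoints. A straight vessel scores exactly 1.0; the more the
    vessel winds, the higher the score."""
    arc = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    chord = math.dist(points[0], points[-1])
    return arc / chord

# Toy 3-D centrelines: one straight, one with a single kink.
straight = [(0, 0, 0), (1, 0, 0), (2, 0, 0)]
zigzag = [(0, 0, 0), (1, 1, 0), (2, 0, 0)]
```

Higher tortuosity in the tumor vessels, as reported above, corresponds to ratios well above 1 on such centrelines.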

    Generalizable automated pixel-level structural segmentation of medical and biological data

    Over the years, the rapid expansion in imaging techniques and equipment has driven the demand for more automation in handling large medical and biological data sets. A wealth of approaches have been suggested as optimal solutions for their respective imaging types. These solutions span various image resolutions, modalities and contrast (staining) mechanisms, yet few approaches generalise well across multiple image types, contrasts or resolutions. This thesis proposes an automated pixel-level framework that addresses 2D, 2D+t and 3D structural segmentation in a more generalizable manner, yet has enough adaptability to address a number of specific image modalities, spanning retinal funduscopy, sequential fluorescein angiography and two-photon microscopy. The pixel-level segmentation scheme involves: i) constructing a phase-invariant orientation field of the local spatial neighbourhood; ii) combining local feature maps with intensity-based measures in a structural patch context; iii) using a complex supervised learning process to interpret the combination of all the elements in the patch in order to reach a classification decision. This has the advantage of transferability from retinal blood vessels in 2D to neural structures in 3D. To process the temporal components in non-standard 2D+t retinal angiography sequences, we first introduce a co-registration procedure: at the pairwise level, we combine projective RANSAC with a quadratic homography transformation to map the coordinate systems between any two frames. At the joint level, we construct a hierarchical approach in order for each individual frame to be registered to the global reference intra- and inter-sequence(s). We then take a non-training approach that searches both the spatial neighbourhood of each pixel and the filter output across varying scales to locate and link microvascular centrelines to (sub-)pixel accuracy.
In essence, this "link while extract" piece-wise segmentation approach combines the local phase-invariant orientation field information with additional local phase estimates to obtain a soft classification of the centreline (sub-)pixel locations. Unlike retinal segmentation problems where vasculature is the main focus, 3D neural segmentation requires additional flexibility, allowing a variety of structures of anatomical importance, yet with different geometric properties, to be differentiated both from the background and against other structures. Notably, cellular structures such as Purkinje cells, neural dendrites and interneurons all display certain elongation along their medial axes, yet each class has a characteristic shape captured by an orientation field that distinguishes it from other structures. To take this into consideration, we introduce a 5D orientation mapping to capture these orientation properties. This mapping is incorporated into the local feature map description prior to a learning machine. Extensive performance evaluation and validation of each of the techniques presented in this thesis is carried out. For retinal fundus images, we compute Receiver Operating Characteristic (ROC) curves on existing public databases (DRIVE & STARE) to assess and compare our algorithms against other benchmark methods. For 2D+t retinal angiography sequences, we compute the error metrics ("Centreline Error") of our scheme against other benchmark methods. For microscopic cortical data stacks, we present segmentation results on both surrogate data with known ground truth and experimental rat cerebellar cortex two-photon microscopic tissue stacks.
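
ROC evaluation of a soft vessel-probability map against manual ground truth, as used for the DRIVE/STARE comparison above, amounts to sweeping a threshold over the pixel scores and plotting true- versus false-positive rates. A self-contained sketch on a tiny invented example:

```python
def roc_points(scores, labels):
    """Compute (FPR, TPR) pairs by sweeping a threshold over soft
    pixel scores, given binary ground-truth labels (1 = vessel)."""
    pairs = sorted(zip(scores, labels), reverse=True)  # high scores first
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for s, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        pts.append((fp / neg, tp / pos))
    return pts

def auc(pts):
    """Area under the ROC curve by the trapezoid rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Perfectly separable toy scores: all vessel pixels score higher than
# all background pixels, so the curve hugs the top-left corner.
pts = roc_points([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0])
```

In practice one would flatten the whole probability map and ground-truth mask into these two lists; libraries such as scikit-learn provide the same computation at scale.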

    Rapid 3D Tracing of the Mouse Brain Neurovasculature with Local Maximum Intensity Projection and Moving Windows

    Neurovascular models have played an important role in understanding neuronal function and medical conditions. In the past few decades, only small volumes of neurovascular data have been available, but huge data sets are becoming available with high-throughput instruments like the Knife-Edge Scanning Microscope (KESM). Fast and robust tracing methods therefore become necessary for such large data sets. However, most tracing methods are not effective in handling complex structures such as branches, and the methods that can handle them are not computationally efficient (i.e., they are slow). Motivated by these issues of speed and robustness, I introduce an effective and efficient fiber tracing algorithm for 2D and 3D data. In 2D tracing, I have implemented a Moving Window (MW) method which leads to a mathematical simplification and noise robustness in determining the trace direction, and which provides enhanced handling of branch points. During tracing, a Cubic Tangential Trace Spline (CTTS) is used as an accurate and fast nonlinear interpolation approach. For 3D tracing, I have designed a method based on local maximum intensity projection (MIP). MIP can utilize any existing 2D tracing algorithm for 3D tracing and can significantly reduce the search space. However, most neurovascular data are too complex to directly use MIP on a large scale, so we use MIP within a limited cube to get unambiguous projections and repeat the MIP-based approach over the entire data set. For processing large amounts of data, the tracing algorithms must be automated, and since automated algorithms may not be 100 percent correct, validation is needed. I validated my approach by comparing the traced results to human-labeled ground truth, showing that the results of my approach are very similar to the ground truth. However, this validation is limited to small-scale real-world data due to the limitations of manual labeling.
For large-scale data, I therefore validated my approach using a model-based generator. The results suggest that my approach can also be used for large-scale real-world data. The main contributions of this research are as follows. My 2D tracing algorithm is fast enough to analyze large volumes of biological data, with processing time linear in fiber length, and is good at handling branches. The new local MIP approach for 3D tracing provides a significant performance improvement and allows the reuse of any existing 2D tracing method. The model-based generator enables tracing algorithms to be validated for large-scale real-world data. My approach is widely applicable for rapid and accurate tracing of large amounts of biomedical data.
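
The cube-limited local MIP idea can be sketched with NumPy as follows; the window size and toy volume are invented for illustration, and real use would slide such windows over the entire data set, handing each 2D projection to an existing 2D tracer:

```python
import numpy as np

def local_mip(volume, center, size):
    """Maximum intensity projection along z within a cube of side
    `size` around `center` (z, y, x), reducing a local 3D search to
    a small 2D image that existing 2D tracing methods can handle."""
    z, y, x = center
    h = size // 2
    # Lower bounds are clamped at 0; slicing clips the upper bounds.
    cube = volume[max(z - h, 0):z + h + 1,
                  max(y - h, 0):y + h + 1,
                  max(x - h, 0):x + h + 1]
    return cube.max(axis=0)   # project along z

# Toy volume with two bright "fiber" voxels near the center.
vol = np.zeros((5, 5, 5), dtype=np.uint8)
vol[2, 2, 2] = 200
vol[3, 2, 3] = 150
mip = local_mip(vol, center=(2, 2, 2), size=3)
```

Keeping the cube small is what keeps the projection unambiguous: distant overlapping vessels never enter the same 2D image.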

    Development of a Cerebellar Mean Field Model: The Theoretical Framework, the Implementation and the First Application

    Brain modeling constantly evolves to improve the accuracy of simulated brain dynamics, with the ambitious aim of building a digital twin of the brain. Models tuned to region-specific features empower brain simulations by introducing bottom-up physiological properties into data-driven simulators. Although the cerebellum contains 80% of the brain's neurons and is deeply involved in a wide range of functions, from sensorimotor to cognitive ones, a specific cerebellar model is still missing. Furthermore, its quasi-crystalline multi-layer circuitry differs deeply from that of the cerebral cortex, so it is hard to imagine a unique general model suitable for the realistic simulation of both the cerebellar and the cerebral cortex. The present thesis tackles the challenge of developing a specific model for the cerebellum. Specifically, a multi-neuron, multi-layer mean-field (MF) model of the cerebellar network, including Granule Cells, Golgi Cells, Molecular Layer Interneurons, and Purkinje Cells, was implemented and validated against experimental data and the corresponding spiking neural network microcircuit model. The cerebellar MF model was built using a system of interdependent equations, where the single neuronal populations and topological parameters were captured by neuron-specific interdependent Transfer Functions. The model's time resolution was optimized using Local Field Potentials recorded experimentally with a high-density multielectrode array from acute mouse cerebellar slices. The MF model satisfactorily captured the average discharge of different microcircuit neuronal populations in response to various input patterns and was able to predict the changes in Purkinje Cell firing patterns occurring in specific behavioral conditions: cortical plasticity mapping, which drives learning in associative tasks, and Molecular Layer Interneuron feed-forward inhibition, which controls Purkinje Cell activity patterns.
The cerebellar multi-layer MF model thus provides a computationally efficient tool for investigating the causal relationship between microscopic neuronal properties and ensemble brain activity in healthy and pathological conditions. Furthermore, preliminary attempts to simulate a pathological cerebellum were made with the prospect of introducing our multi-layer cerebellar MF model into whole-brain simulators to realize patient-specific treatments, moving towards personalized medicine. Two preliminary works assessed the relevant impact of the cerebellum on whole-brain dynamics and its role in modulating complex responses in causally connected cerebral regions, confirming that a specific model is required to further investigate the cerebellum-on-cerebrum influence. The framework presented in this thesis allows the development of a multi-layer MF model depicting the features of a specific brain region (e.g., cerebellum, basal ganglia), defining a general strategy for building a pool of biologically grounded MF models for computationally feasible simulations. Interconnected bottom-up MF models integrated into large-scale simulators would capture specific features of different brain regions, while the applications of a virtual brain would have a substantial real-world impact, ranging from the characterization of neurobiological processes to subject-specific preoperative planning and the development of neuro-prosthetic devices.
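
As a hedged sketch of the modelling ingredient described above (a generic form, not the thesis's actual equations), a multi-population mean-field model of this kind is typically written as a set of coupled first-order rate equations, one per population $p$, driven by population-specific transfer functions:

```latex
\tau_p \frac{d\nu_p}{dt} = F_p\!\left(\nu_e, \nu_i, \nu_{\mathrm{ext}}\right) - \nu_p
```

Here $\nu_p$ is the mean firing rate of population $p$ (e.g., granule, Golgi, molecular-layer interneuron, or Purkinje), $\tau_p$ its characteristic time constant, $\nu_e$ and $\nu_i$ the presynaptic excitatory and inhibitory rates, $\nu_{\mathrm{ext}}$ the external drive, and $F_p$ the fitted, population-specific transfer function; the interdependence of the $F_p$ across layers is what makes the system multi-layer.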

    Fluorescence microscopy image analysis of retinal neurons using deep learning

    An essential goal of neuroscience is to understand the brain by simultaneously identifying, measuring, and analyzing activity from individual cells within a neural population in live brain tissue. Analyzing fluorescence microscopy (FM) images in real time with computational algorithms is essential for achieving this goal. Deep learning techniques have shown promise in this area, but face domain-specific challenges due to limited training data, significant amounts of voxel noise in FM images, and thin structures present in large 3D images. In this thesis, I address these issues by introducing a novel deep learning pipeline to analyze static FM images of neurons with minimal data requirements and demonstrate the pipeline's ability to segment neurons from low signal-to-noise-ratio FM images with few training samples. The first step of this pipeline employs a Generative Adversarial Network (GAN) equipped to learn imaging properties from a small set of static FM images acquired for a given neuroscientific experiment. Operating like an actual microscope, our fully trained GAN can then generate realistic static FM images from volumetric reconstructions of neurons, with added control over the intensity and noise of the generated images. For the second step in our pipeline, a novel segmentation network is trained on GAN-generated images with reconstructed neurons serving as "gold standard" ground truths. While training on a large dataset of FM images is optimal, a 15% improvement in neuron segmentation accuracy from noisy FM images is shown when architectures are fine-tuned on only a small subsample of real image data. To evaluate the overall feasibility of our pipeline and the utility of generated images, 2 novel synthetic and 3 newly acquired FM image datasets are introduced along with a new evaluation protocol for FM image "realness" that incorporates content, texture, and expert opinion metrics.
While this pipeline's primary application is to segment neurons from highly noisy FM images, its utility can be extended to automate other FM tasks such as synapse identification, neuron classification, or super-resolution.
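
Segmentation-accuracy claims like the 15% improvement above are typically scored with an overlap metric; a minimal sketch of the Dice coefficient, a common choice for this purpose (the thesis's full protocol also weighs content, texture, and expert-opinion metrics):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1: twice the overlap divided by the total
    foreground in both masks. 1.0 means perfect agreement."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * inter / total if total else 1.0  # two empty masks agree

# Toy 5-pixel masks: 2 pixels overlap out of 3 foreground in each.
pred  = [1, 1, 0, 1, 0]
truth = [1, 0, 0, 1, 1]
score = dice(pred, truth)
```

In practice the masks are whole flattened 2D or 3D label volumes; the formula is unchanged.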