174 research outputs found

    Microglial morphometric analysis: so many options, so little consistency

    Quantification of microglial activation through morphometric analysis has long been a staple of the neuroimmunologist’s toolkit. Microglial morphological phenomics can be conducted either through manual classification or by constructing a digital skeleton and extracting morphometric data from it. Multiple open-access and paid software packages are available to generate these skeletons via semi-automated and/or fully automated methods with varying degrees of accuracy. Despite advancements in methods to generate morphometrics (quantitative measures of cellular morphology), there has been limited development of tools to analyze the datasets they generate, in particular those containing parameters from tens of thousands of cells analyzed by fully automated pipelines. In this review, we compare and critique the cluster-analysis and machine learning-driven predictive approaches that have been developed to tackle these large datasets, and we propose improvements to these methods. In particular, we highlight the need for a commitment to open science from groups developing these classifiers. Furthermore, we call attention to the need for communication between those with a strong software engineering/computer science background and neuroimmunologists to produce effective analytical tools with simplified operability if we are to see their widespread adoption by the glia biology community.
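
    As one concrete illustration of the cluster-analysis approaches reviewed here, the sketch below groups per-cell morphometrics into putative morphological phenotypes with hierarchical clustering in scikit-learn. The file name, feature columns and cluster count are illustrative assumptions, not a pipeline prescribed by the review.
```python
# Minimal sketch: hierarchical clustering of per-cell morphometrics.
# File name, feature columns and cluster count are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering

cells = pd.read_csv("microglia_morphometrics.csv")  # hypothetical morphometrics export
features = cells[["branch_length", "num_endpoints", "territory_area"]]

# Morphometrics live on very different scales, so standardise them first.
X = StandardScaler().fit_transform(features)

# Ward-linkage clustering groups cells into putative morphological phenotypes.
cells["cluster"] = AgglomerativeClustering(n_clusters=4, linkage="ward").fit_predict(X)
print(cells.groupby("cluster").mean(numeric_only=True))
```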

    How worms move in 3D

    Animals that live in the sky, underwater or underground display unique three-dimensional behaviours made possible by their ability to generate movement in all directions. As animals explore their environment, they constantly adapt their locomotion strategies to balance factors such as distance travelled, speed, and energy expenditure. While exploration strategies have been widely studied across a variety of species, how animals explore 3D space remains an open problem. The nematode Caenorhabditis elegans presents an ideal candidate for the study of 3D exploration as it is naturally found in complex fluid and granular environments and is well sized (~1 mm long) for the simultaneous capture of individual postures and long-term trajectories using a fixed imaging setup. However, until recently C. elegans has been studied almost exclusively in planar environments, and in 3D neither its modes of locomotion nor its exploration strategies are known. Here we present methods for reconstructing microscopic postures and tracking macroscopic trajectories from a large corpus of triaxial recordings of worms freely exploring complex gelatinous fluids. To account for the constantly changing optical properties of these gels, we develop a novel differentiable renderer to construct images from 3D postures for direct comparison with the recorded images. The method is robust to interference such as air bubbles and dirt trapped in the gel, stays consistent through complex sequences of postures and recovers reliable estimates from low-resolution, blurry images. Using this approach we generate a large dataset of 3D exploratory trajectories (over 6 hours) and midline postures (over 4 hours). We find that C. elegans explore 3D space through the composition of quasi-planar regions separated by turns and variable-length runs. To achieve this, C. elegans use locomotion gaits and complex manoeuvres that differ from those previously observed on an agar surface. We show that the associated costs of locomotion increase with non-planarity, and we develop a mathematical model to probe the implications of this connection. We find that quasi-planar strategies (such as those we find in the data) yield the largest volumes explored, as they provide a balance between 3D coverage and trajectory distance. Taken together, our results link locomotion primitives with exploration strategies in the context of short-term volumetric foraging to provide a first integrated study of how worms move in 3D.
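
    For intuition on how quasi-planarity can be quantified in such trajectories, the sketch below scores a trajectory window by the fraction of positional variance along its third principal axis (0 for a perfectly planar window). This is an illustrative measure under our own assumptions, not necessarily the one used in the study.
```python
# Minimal sketch: a simple non-planarity score for a window of a 3D trajectory,
# defined as the fraction of positional variance along the third principal axis.
import numpy as np

def nonplanarity(xyz: np.ndarray) -> float:
    """xyz: (N, 3) array of positions within one trajectory window."""
    centred = xyz - xyz.mean(axis=0)
    svals = np.linalg.svd(centred, compute_uv=False)  # spread along principal axes
    var = svals ** 2
    return float(var[2] / var.sum())

# Example: a helix is markedly less planar than a flat circle.
t = np.linspace(0, 4 * np.pi, 200)
helix = np.c_[np.cos(t), np.sin(t), 0.2 * t]
circle = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
print(nonplanarity(helix), nonplanarity(circle))
```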

    Proposal for Numerical Benchmarking of Fluid-Structure Interaction in Cerebral Aneurysms

    Computational fluid dynamics is intensively used to deepen the understanding of aneurysm growth and rupture in an attempt to support physicians during therapy planning. Numerous studies have assumed fully rigid vessel walls in their simulations, yet hemodynamics alone may fail to provide a satisfactory criterion for rupture risk assessment. Moreover, direct in-vivo observations of intracranial aneurysm pulsation have recently been reported, encouraging the development of fluid-structure interaction modelling and new assessments. In this work, we describe a new fluid-structure interaction benchmark setting for the careful evaluation of different aneurysm shapes. The studied configurations consist of three real aneurysm domes positioned on a toroidal channel. All geometric features, meshing characteristics, flow quantities, comparisons with a rigid-wall model and corresponding plots are provided. Reported results emphasize the alteration of flow patterns and hemodynamic descriptors when moving from the rigid-wall model to the complete fluid-structure interaction framework, thereby underlining the importance of the coupling between hemodynamics and the surrounding vessel tissue. Comment: 23 pages, 14 figures.
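
    To make such hemodynamic descriptors concrete, the sketch below computes time-averaged wall shear stress (TAWSS) and the oscillatory shear index (OSI) from wall-shear-stress vectors sampled at uniform time steps. The array layout and the synthetic input are assumptions; the benchmark itself prescribes which quantities and plots to report.
```python
# Minimal sketch: TAWSS and OSI from wall-shear-stress vectors sampled at
# uniform time steps over one cycle. Array shapes and data are illustrative.
import numpy as np

def tawss_osi(wss: np.ndarray):
    """wss: (T, N, 3) shear-stress vectors over T time steps at N surface nodes."""
    mag_mean = np.linalg.norm(wss, axis=-1).mean(axis=0)  # time-averaged |tau|
    vec_mean = wss.mean(axis=0)                           # time-averaged tau vector
    tawss = mag_mean
    # OSI in [0, 0.5]: 0 = unidirectional shear, 0.5 = fully oscillatory shear.
    osi = 0.5 * (1.0 - np.linalg.norm(vec_mean, axis=-1) / (mag_mean + 1e-12))
    return tawss, osi

wss = np.random.randn(100, 5000, 3)  # synthetic placeholder data
tawss, osi = tawss_osi(wss)
print(tawss.shape, float(osi.min()), float(osi.max()))
```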

    Automatic Rural Road Centerline Extraction from Aerial Images for a Forest Fire Support System

    In the last decades, Portugal has been severely affected by forest fires, which have caused massive environmental and social damage. Having a well-structured and precise mapping of rural roads is critical to help firefighters mitigate these events. The traditional process of extracting rural road centerlines from aerial images is extremely time-consuming and tedious, because the mapping operator has to manually label the road area and extract the road centerline. A frequent challenge in extracting rural road centerlines is the high environmental complexity and the road occlusions caused by vehicles, shadows, wild vegetation, and trees, which produce heterogeneous segments that can be further improved. This dissertation proposes an approach to automatically detect rural road segments and extract road centerlines from aerial images. The proposed method consists of two main steps: in the first step, an architecture based on a deep learning model (DeepLabV3+) is used to extract road feature maps and detect the rural roads. In the second step, the prediction is first optimized by improving road connections and removing small white objects from the image predicted by the neural network; a morphological approach is then applied to extract the rural road centerlines from the previously detected roads using thinning algorithms such as the Zhang-Suen and Guo-Hall methods. With the automation of these two stages, road centerlines can be detected and extracted from complex rural environments automatically and faster than with traditional methods, and the resulting data can be integrated into a Geographical Information System (GIS), allowing the creation of real-time mapping applications.
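
    A minimal sketch of the second step (morphological clean-up followed by thinning) is given below using scikit-image; the file names and size thresholds are assumptions, and scikit-image's Zhang-style skeletonization stands in for the Zhang-Suen/Guo-Hall implementations used in the dissertation.
```python
# Minimal sketch: clean a predicted binary road mask, then thin it to
# one-pixel-wide centerlines. File names and thresholds are illustrative.
import numpy as np
from skimage import io, morphology

# Single-channel prediction map from the segmentation network (assumed).
mask = io.imread("road_prediction.png", as_gray=True) > 0.5
mask = morphology.binary_closing(mask, morphology.disk(3))   # bridge small gaps
mask = morphology.remove_small_objects(mask, min_size=200)   # drop stray blobs

centerline = morphology.skeletonize(mask, method="zhang")    # Zhang-style thinning
io.imsave("road_centerline.png", (centerline * 255).astype(np.uint8))
```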

    3D segmentation and localization using visual cues in uncontrolled environments

    3D scene understanding is an important area in robotics, autonomous vehicles, and virtual reality. The goal of scene understanding is to recognize and localize all the objects around the agent. This is done through semantic segmentation and depth estimation. Current approaches focus on improving robustness for each task, but fail to make the methods efficient enough for real-time use. This thesis presents four efficient methods for scene understanding that work in real environments. The methods also aim to provide a solution for both 2D and 3D data. The first approach presents a pipeline that combines the block-matching algorithm for disparity estimation, an encoder-decoder neural network for semantic segmentation, and a refinement step that uses both outputs to complete the regions that were not labelled or did not have any disparity assigned to them. This method provides accurate results for 3D reconstruction and morphology estimation of complex structures like rose bushes. Due to the lack of datasets of rose bushes and their segmentation, we also created three large datasets. Two of them contain real roses that were manually labelled, and the third one was created using a scene modeler and 3D rendering software. The last dataset aims to capture diversity and realism and to provide different types of labelling. The second contribution provides a strategy for real-time rose pruning using visual servoing of a robotic arm together with our previous approach. Current methods obtain the structure of the plant and plan the cutting trajectory using only a global planner, and they assume a constant background. Our method works in real environments and uses visual feedback to refine the location of the cutting targets and modify the planned trajectory. The proposed visual servoing allows the robot to reach the cutting points 94% of the time, an improvement over using only a global planner without visual feedback, which reaches the targets 50% of the time. To the best of our knowledge, this is the first robot able to prune a complete rose bush in a natural environment. Recent deep learning networks for image segmentation and disparity estimation provide accurate results. However, most of these methods are computationally expensive, which makes them impractical for real-time tasks. Our third contribution uses multi-task learning to learn image segmentation and disparity estimation together, end-to-end. The experiments show that our network has at most one third of the parameters of the state of the art for each individual task and still provides competitive results. The last contribution explores scene understanding using 3D data. Recent approaches use point-based networks for point cloud segmentation and find local relations between points using only the latent features provided by the network, omitting the geometric information from the point clouds. Our approach aggregates the geometric information into the network. Given that the geometric and latent features are different, our network also uses a two-headed attention mechanism to perform local aggregation at both the latent and the geometric level. This additional information helps the network to obtain more accurate semantic segmentation on real point cloud data while using fewer parameters than current methods. Overall, the method obtains state-of-the-art segmentation on the real-world S3DIS dataset (69.2%) and competitive results on the ModelNet40 and ShapeNetPart datasets.
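
    As an example of the classical component in the first pipeline, the sketch below runs OpenCV's block-matching disparity estimator on a rectified stereo pair. File names and parameters are illustrative, and the refinement step that fuses disparity with the segmentation output is not reproduced here.
```python
# Minimal sketch: block-matching disparity on a rectified stereo pair.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # OpenCV scales by 16

# Unmatched pixels come back negative; these are what a refinement step would fill in.
valid = disparity > 0
print(f"valid disparity coverage: {valid.mean():.2%}")
```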

    PyPore3D: An Open Source Software Tool for Imaging Data Processing and Analysis of Porous and Multiphase Media

    In this work, we propose the software library PyPore3D, an open source solution for data processing of large 3D/4D tomographic data sets. PyPore3D is based on the Pore3D core library, developed through the collaboration between Elettra Sincrotrone (Trieste) and the University of Trieste (Italy). The Pore3D core library is built with a distinction between the user interface and the backend filtering, segmentation, morphological processing, skeletonisation and analysis functions. The current Pore3D version relies on the closed-source IDL framework to call the backend functions and enables simple scripting procedures for streamlined data processing. PyPore3D addresses this limitation by proposing a fully open source solution that provides Python wrappers for the Pore3D C library functions. The PyPore3D library allows users to fully use the Pore3D core library as an open source solution under Python and Jupyter Notebooks. PyPore3D both removes the intrinsic limitations of licensed platforms (e.g., closed source and export restrictions) and adds, when needed, the flexibility of integrating the scientific libraries available for Python (SciPy, TensorFlow, etc.).
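
    The general wrapping pattern such a library can follow is sketched below: a C routine compiled into a shared library is exposed to Python/NumPy via ctypes. The library path and the function name/signature are hypothetical placeholders, not the actual PyPore3D API.
```python
# Minimal sketch of wrapping a C filtering routine for use from Python/NumPy.
# The shared-library name and the C signature below are hypothetical, not PyPore3D's API.
import ctypes
import numpy as np

lib = ctypes.CDLL("./libexample_filters.so")  # hypothetical build artifact

# Hypothetical C signature:
#   int filter3d(unsigned char *in, unsigned char *out, int nx, int ny, int nz);
lib.filter3d.restype = ctypes.c_int
lib.filter3d.argtypes = [
    ctypes.POINTER(ctypes.c_ubyte), ctypes.POINTER(ctypes.c_ubyte),
    ctypes.c_int, ctypes.c_int, ctypes.c_int,
]

vol = np.random.randint(0, 256, size=(64, 64, 64), dtype=np.uint8)  # toy volume
out = np.empty_like(vol)
lib.filter3d(
    vol.ctypes.data_as(ctypes.POINTER(ctypes.c_ubyte)),
    out.ctypes.data_as(ctypes.POINTER(ctypes.c_ubyte)),
    *(ctypes.c_int(n) for n in vol.shape),
)
```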

    32. Forum Bauinformatik 2021

    The Forum Bauinformatik is an annual conference and an important part of the construction informatics community in the German-speaking countries. It offers early-career researchers in particular the opportunity to present their research work, to discuss problems specific to the field, and to keep up with the latest state of research. It is an excellent opportunity to enter the scientific community in the field of construction informatics and to establish contacts with other researchers.

    clDice -- a Novel Topology-Preserving Loss Function for Tubular Structure Segmentation

    Accurate segmentation of tubular, network-like structures such as vessels, neurons, or roads is relevant to many fields of research. For such structures, topology is the most important characteristic, in particular the preservation of connectedness: in the case of vascular networks, missing a connected vessel entirely alters the blood-flow dynamics. We introduce a novel similarity measure termed centerlineDice (clDice for short), which is calculated on the intersection of the segmentation masks and their (morphological) skeleta. We theoretically prove that clDice guarantees topology preservation up to homotopy equivalence for binary 2D and 3D segmentation. Extending this, we propose a computationally efficient, differentiable loss function (soft-clDice) for training arbitrary neural segmentation networks. We benchmark the soft-clDice loss on five public datasets, including vessels, roads and neurons (2D and 3D). Training on soft-clDice leads to segmentation with more accurate connectivity information, higher graph similarity, and better volumetric scores. Comment: The authors Suprosanna Shit and Johannes C. Paetzold contributed equally to this work.
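
    A sketch of the hard (non-differentiable) clDice metric for 2D binary masks is given below: the harmonic mean of topology precision and topology sensitivity, both computed from morphological skeletons. The differentiable soft-clDice loss used for training replaces the hard skeletonization with a soft one and is not shown here.
```python
# Minimal sketch: hard clDice for 2D binary masks (not the differentiable
# soft-clDice training loss proposed in the paper).
import numpy as np
from skimage.morphology import skeletonize

def cl_dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    skel_pred, skel_target = skeletonize(pred), skeletonize(target)
    # Topology precision: fraction of the predicted skeleton lying inside the target mask.
    tprec = (skel_pred & target).sum() / (skel_pred.sum() + eps)
    # Topology sensitivity: fraction of the target skeleton lying inside the predicted mask.
    tsens = (skel_target & pred).sum() / (skel_target.sum() + eps)
    return float(2 * tprec * tsens / (tprec + tsens + eps))
```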

    A fully automated pipeline for a robust conjunctival hyperemia estimation

    Purpose: Many semi-automated and fully automated approaches have been proposed in the literature to improve the objectivity of conjunctival hyperemia estimation, based on image-processing analysis of eye photographs. The purpose is to improve its evaluation using faster, fully automated systems that are independent of human subjectivity. Methods: In this work, we introduce a fully automated analysis of redness grading scales that completely automates the clinical procedure, from the acquired image to the redness estimation. In particular, we introduce a neural network model for conjunctival segmentation, followed by an image-processing pipeline for vessel network segmentation. From these steps, we extract features already known in the literature, whose correlation with conjunctival redness has already been demonstrated. Lastly, we implemented a predictive model for conjunctival hyperemia using these features. Results: We used a dataset of images acquired during clinical practice. We trained a neural network model for conjunctival segmentation, obtaining an average accuracy of 0.94 and a corresponding IoU score of 0.88 on a test set of images. The set of features extracted from these ROIs correctly predicts the Efron scale values with a Spearman correlation coefficient of 0.701 on a set of previously unused samples. Conclusions: The robustness of our pipeline confirms its potential use in clinical practice as a viable decision support system for ophthalmologists.
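
    For illustration only, the sketch below computes one plausible redness feature (relative red-channel dominance inside the segmented conjunctival ROI) and correlates it with clinician Efron grades using Spearman's rank coefficient. The feature, variable names and example values are assumptions; the paper relies on a richer, validated feature set and a trained predictive model.
```python
# Minimal sketch: one illustrative redness feature and its rank correlation
# with clinician Efron grades. Feature and example values are made up.
import numpy as np
from scipy.stats import spearmanr

def redness_feature(rgb: np.ndarray, roi_mask: np.ndarray) -> float:
    """rgb: (H, W, 3) float image in [0, 1]; roi_mask: boolean conjunctiva mask."""
    r, g, b = (rgb[..., c][roi_mask] for c in range(3))
    return float(np.mean(r / (r + g + b + 1e-6)))

# Correlate the feature with clinician grades over a set of images.
features = np.array([0.41, 0.36, 0.52, 0.47, 0.33])  # example feature values
efron = np.array([2, 1, 3, 3, 1])                    # example Efron grades
rho, p = spearmanr(features, efron)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```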