
    Gebiss: an ImageJ plugin for the specification of ground truth and the performance evaluation of 3D segmentation algorithms.

    Background: Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interaction, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods. Results: We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation. Conclusions: We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss
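Threshold-based segmentation, the family of methods this plugin integrates, reduces to classifying each voxel of the stack against an intensity cutoff and then quantifying the foreground. A minimal Python/NumPy sketch of that idea follows; the synthetic stack, the "nucleus" positions and the threshold value are illustrative assumptions, not taken from Gebiss:

```python
import numpy as np

# Synthetic 3D image stack: dim background noise plus two bright "nuclei".
rng = np.random.default_rng(0)
stack = rng.normal(loc=20, scale=5, size=(8, 64, 64))  # (slices, rows, cols)
stack[2:5, 10:20, 10:20] += 200   # nucleus 1: 3 slices x 10 x 10 voxels
stack[4:7, 40:50, 40:50] += 180   # nucleus 2: 3 slices x 10 x 10 voxels

# Global threshold: every voxel above the cutoff is foreground.
threshold = 100
mask = stack > threshold

# Simple quantification: foreground volume (voxels) and mean intensity.
volume = int(mask.sum())
mean_intensity = float(stack[mask].mean())
print(volume)  # 300 + 300 = 600 voxels
```

Real tools additionally label connected components so that each nucleus can be measured separately; a single global threshold is the simplest member of the family.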

    Smoke plume segmentation of wildfire images

    This work falls within the field of deep learning with neural networks. The aim of the project is to analyse and apply currently available neural networks to solve a specific problem: the segmentation of smoke plumes in wildfire images. A study of the neural networks used to solve image segmentation problems was carried out, together with a subsequent 3D reconstruction of these smoke plumes. The algorithm finally chosen is the UNet model, a convolutional neural network based on an autoencoder structure with skip connections, which is trained to produce a per-pixel prediction of the target class, in this case smoke plumes.
In addition, a comparison between traditional algorithms and the deep-learning-based UNet model was carried out, showing that the UNet model achieves the best results both quantitatively and qualitatively, at the cost of more computing time. All models were developed in the Python programming language using the TensorFlow and Keras machine learning libraries. Within the UNet model, multiple experiments were run to find the hyperparameter values best suited to the application, yielding an accuracy of 93.45% for the final model for smoke segmentation in wildfire images.
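The abstract reports pixel accuracy (93.45%) as its headline metric. A minimal sketch of how pixel accuracy, and the stricter intersection-over-union (IoU) often used for segmentation, are computed for a binary smoke mask; the 0.5 threshold and the toy arrays are assumptions for illustration, not values from the thesis:

```python
import numpy as np

def pixel_accuracy(pred_probs, truth, thresh=0.5):
    """Fraction of pixels whose thresholded prediction matches the ground truth."""
    pred = pred_probs >= thresh
    return float((pred == truth).mean())

def iou(pred_probs, truth, thresh=0.5):
    """Intersection over union of the predicted and true smoke masks."""
    pred = pred_probs >= thresh
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

# Toy 4x4 "image": ground-truth smoke fills the left half.
truth = np.zeros((4, 4), dtype=bool)
truth[:, :2] = True

pred_probs = np.zeros((4, 4))
pred_probs[:, :2] = 0.9   # smoke detected correctly
pred_probs[0, 2] = 0.7    # one false-positive pixel

print(pixel_accuracy(pred_probs, truth))  # 0.9375  (15 of 16 pixels correct)
print(iou(pred_probs, truth))             # 8/9 ~ 0.889
```

Accuracy counts the many easy background pixels, which is why IoU usually gives a more conservative picture of the same prediction.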

    Modeling and visualization of medical anesthesiology acts

    Dissertation for the degree of Master in Computer Engineering. In recent years, medical visualization has evolved from simple 2D images on a light board to 3D computerized images. This move has enabled doctors to find better ways of planning surgery and diagnosing patients. Although there is a great variety of 3D medical imaging software, it falls short when dealing with anesthesiology acts: very little anaesthesia-related work has been done. As a consequence, doctors and medical students have had little support for studying the behaviour of anesthesia in the human body. We are all aware of how costly setting up medical experiments can be, covering not just medical aspects but ethical and financial ones as well. With this work we hope to contribute to better medical visualization tools in the area of anesthesiology, so that doctors, and in particular medical students, can study anesthesiology acts more efficiently. They should be able to identify better locations to administer the anesthesia, to study how long the anesthesia takes to affect patients, to relate its effect on patients to the quantity of anaesthesia provided, and so on. In this work, we present a medical visualization prototype with three main functionalities: image pre-processing, segmentation and rendering. The image pre-processing stage is mainly used to remove noise from images obtained via imaging scanners. In the segmentation stage it is possible to identify relevant anatomical structures using suitable segmentation algorithms. As a proof of concept, we focus our attention on the lumbosacral region of the human body, with data acquired via MRI scanners. The segmentation we provide relies mostly on two algorithms: region growing and level sets. The outcome of the segmentation is a 3D model of the anatomical structure under analysis. As for the rendering, the 3D models are visualized using the marching cubes algorithm.
The software we have developed also supports time-dependent data, so it could represent the anesthesia flowing through the human body. Unfortunately, we were not able to obtain such data for testing, but we have used human lung data to validate this functionality.
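Region growing, one of the two segmentation algorithms the prototype relies on, amounts to a breadth-first flood from a seed point that accepts neighbours whose intensity stays within a tolerance of the seed value. A small 2D sketch in plain Python; the toy "MRI slice", the seed position and the tolerance are illustrative assumptions:

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity differs from the seed intensity by at most `tol`."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy "MRI slice": a bright 2x2 structure on a dark background.
img = [[10, 10, 10, 10],
       [10, 90, 95, 10],
       [10, 92, 94, 10],
       [10, 10, 10, 10]]
print(sorted(region_grow(img, (1, 1), tol=10)))
# [(1, 1), (1, 2), (2, 1), (2, 2)]
```

A 3D version simply uses 6-connected neighbours across slices; level sets, the other algorithm mentioned, instead evolve a contour by solving a PDE and are considerably more involved.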

    Automated Distinct Bone Segmentation from Computed Tomography Images using Deep Learning

    Large-scale CT scans are frequently performed for forensic and diagnostic purposes, to plan and direct surgical procedures, and to track the development of bone-related diseases. This often involves radiologists who have to annotate bones manually or semi-automatically, which is a time-consuming task. Their annotation workload can be reduced by automated segmentation and detection of individual bones. Automating distinct bone segmentation not only has the potential to accelerate current workflows but also opens up new possibilities for processing and presenting medical data for planning, navigation, and education. In this thesis, we explored the use of deep learning for automating the segmentation of all individual bones within an upper-body CT scan. To do so, we had to find a network architecture that provides a good trade-off between the problem's high computational demands and the accuracy of the results. After finding a baseline method and enlarging the dataset, we set out to eliminate the most prevalent types of error. To this end, we introduced a novel method called binary-prediction-enhanced multi-class (BEM) inference, which separates the task into two: distinguishing bone from non-bone is conducted separately from identifying the individual bones. Both predictions are then merged, which leads to superior results. Another type of error is tackled by our developed architecture, the Sneaky-Net, which receives additional inputs with larger fields of view but at a smaller resolution. We can thus sneak more extensive areas of the input into the network while keeping the growth of additional pixels in check. Overall, we present a deep-learning-based method that reliably segments most of the over one hundred distinct bones present in upper-body CT scans in an end-to-end trained manner, quickly enough to be used in interactive software.
Our algorithm has been included in our group's virtual reality medical image visualisation software SpectoVR, with the plan to use it as one of the puzzle pieces in surgical planning and navigation, as well as in the education of future doctors.
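The BEM merge described in the abstract combines a binary bone/non-bone prediction with a multi-class per-bone prediction. The exact merge rule is not spelled out here, so the sketch below assumes one plausible rule: take the per-voxel argmax over the bone classes, but keep it only where the binary network predicts bone, outputting background elsewhere. All names and values are illustrative, not from the thesis:

```python
import numpy as np

def bem_merge(multiclass_logits, binary_mask):
    """Illustrative binary-prediction-enhanced multi-class merge:
    per-voxel argmax over bone classes (labels 1..C), masked to the
    voxels the dedicated bone/non-bone network marks as bone;
    everything else becomes background (label 0)."""
    labels = multiclass_logits.argmax(axis=0) + 1  # class index -> label 1..C
    return np.where(binary_mask, labels, 0)

# Toy "volume" of 4 voxels and 2 bone classes.
logits = np.array([[2.0, 0.1, 3.0, 0.2],    # scores for bone class 1
                   [1.0, 0.5, 0.2, 4.0]])   # scores for bone class 2
binary = np.array([True, False, True, True])  # bone / non-bone prediction
print(bem_merge(logits, binary).tolist())  # [1, 0, 1, 2]
```

The appeal of such a split is that the binary task is much easier to get right, so it can veto the multi-class network's false positives on clearly non-bone voxels.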