1,062 research outputs found

    Doctor of Philosophy

    Confocal microscopy has become a popular imaging technique in biology research in recent years. It is often used to study the three-dimensional (3D) structures of biological samples. Confocal data are commonly multichannel, with each channel resulting from a different fluorescent stain. The technique also resolves finely detailed 3D structures such as neuron fibers. Despite the plethora of volume rendering techniques that have been available for many years, biologists still demand a flexible tool that allows interactive visualization and analysis of multichannel confocal data. Together with biologists, we have designed and developed FluoRender. It incorporates volume rendering techniques such as a two-dimensional (2D) transfer function and multichannel intermixing. Rendering results can be enhanced through tone mapping and overlays. To facilitate analysis of confocal data, FluoRender provides interactive operations for extracting complex structures. Furthermore, we developed the Synthetic Brainbow technique, which takes advantage of the asynchronous behavior of Graphics Processing Unit (GPU) framebuffer loops to generate random colorizations for different structures in single-channel confocal data. The results of the Synthetic Brainbow technique, when applied to a sequence of developing cells, can then be used for tracking the movements of these cells. Finally, we present an application of FluoRender in the workflow of constructing anatomical atlases.
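    As an aside (not part of the original abstract): a minimal sketch of the kind of two-dimensional transfer function mentioned above, assuming the classic intensity/gradient-magnitude formulation; FluoRender's actual implementation runs on the GPU and may differ in detail.

```python
import numpy as np

def apply_2d_transfer_function(volume, tf_table):
    """Look up a per-voxel opacity from a 2D table indexed by
    (intensity, gradient magnitude) -- the two axes of a 2D transfer function.
    volume: 3D float array with values in [0, 1]."""
    # Gradient magnitude via central differences.
    gx, gy, gz = np.gradient(volume)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    grad_mag /= grad_mag.max() + 1e-12            # normalize to [0, 1]

    n_i, n_g = tf_table.shape
    i_bin = np.clip((volume * (n_i - 1)).astype(int), 0, n_i - 1)
    g_bin = np.clip((grad_mag * (n_g - 1)).astype(int), 0, n_g - 1)
    return tf_table[i_bin, g_bin]

# Usage: an opacity table that emphasizes boundaries (high gradient magnitude).
vol = np.random.default_rng(0).random((32, 32, 32))
tf = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # opacity grows with gradient
opacity = apply_2d_transfer_function(vol, tf)
```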

    Segmentation of striatal brain structures from high resolution pet images

    Dissertation presented at the Faculty of Science and Technology of the New University of Lisbon in fulfillment of the requirements for the Master's degree in Electrical Engineering and Computers. We propose and evaluate fully automatic segmentation methods for the extraction of striatal brain surfaces (caudate, putamen, ventral striatum, and white matter) from high resolution positron emission tomography (PET) images. In the preprocessing steps, both the right and the left striata were segmented from the high resolution PET images. This segmentation was achieved by delineating the brain surface, finding the plane that maximizes the reflective symmetry of the brain (the mid-sagittal plane), and finally extracting the right and left striata from the two hemisphere images. The delineation of the brain surface and the extraction of the striata were achieved using the DSM-OS (Surface Minimization – Outer Surface) algorithm. The segmentation of striatal brain surfaces from the striatal images can be separated into two sub-processes: the construction of a graph (named the "voxel affinity matrix") and the clustering of that graph. The voxel affinity matrix was built using a set of image features that accurately informs the clustering method about the relationships between image voxels. The features defining the similarity of pairwise voxels were spatial connectivity, intensity values, and Euclidean distances. The clustering process is treated as a graph partition problem using two methods: a spectral one (multiway normalized cuts) and a non-spectral one (weighted kernel k-means). The normalized cuts algorithm relies on the computation of the graph eigenvalues to partition the graph into connected regions. However, this method fails when applied to high resolution PET images due to the high computational requirements arising from the image size. On the other hand, the weighted kernel k-means iteratively classifies, with the aid of the image features, a given data set into a predefined number of clusters. The weighted kernel k-means and the normalized cuts algorithm are mathematically similar. After finding the optimal initial parameters for the weighted kernel k-means for this type of image, no further tuning is necessary for subsequent images. Our results showed that the putamen and ventral striatum were accurately segmented, while the caudate and white matter appeared merged into the same cluster. The putamen was divided into anterior and posterior areas. All the experiments resulted in the same type of segmentation, validating the reproducibility of our results.
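    Not from the thesis itself, but to make the two sub-processes concrete: a toy sketch of a voxel affinity matrix built from the three stated features (spatial connectivity, intensity, Euclidean distance), followed by a two-way normalized cut in the Shi-Malik style. The kernel widths sigma_i and sigma_x are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def voxel_affinity(image, sigma_i=0.1, sigma_x=2.0, radius=2):
    """Affinity ('voxel affinity matrix') for a small 2D image: voxels are
    connected only within a spatial radius, and their edge weight decays
    with both intensity difference and Euclidean distance."""
    h, w = image.shape
    coords = np.indices((h, w)).reshape(2, -1).T.astype(float)
    vals = image.ravel()
    n = h * w
    W = np.zeros((n, n))
    for p in range(n):
        d = np.linalg.norm(coords - coords[p], axis=1)   # Euclidean distance
        near = d <= radius                               # spatial connectivity
        W[p, near] = (np.exp(-(vals[near] - vals[p])**2 / sigma_i**2)
                      * np.exp(-d[near]**2 / sigma_x**2))
    np.fill_diagonal(W, 0.0)
    return W

def ncut_bipartition(W):
    """Two-way normalized cut: threshold the second eigenvector of the
    symmetric normalized Laplacian. This eigendecomposition is the costly
    step that makes the spectral method impractical for large volumes."""
    d = W.sum(axis=1)
    D = np.diag(1.0 / np.sqrt(d + 1e-12))
    L = np.eye(len(d)) - D @ W @ D
    _, vecs = np.linalg.eigh(L)
    fiedler = vecs[:, 1]
    return fiedler > np.median(fiedler)

# Usage: a tiny two-region image splits cleanly into two clusters.
img = np.zeros((10, 10)); img[:, 5:] = 1.0
labels = ncut_bipartition(voxel_affinity(img)).reshape(10, 10)
```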

    3D Reconstruction of Neural Circuits from Serial EM Images

    A basic requirement for reconstructing and understanding complete circuit diagrams of neuronal processing units is the availability of electron microscopic 3D data sets of large ensembles of neurons. A recently developed technique, "Serial Block Face Scanning Electron Microscopy" (SBFSEM, Denk and Horstmann 2004), allows automatic sectioning and imaging of biological tissue inside the vacuum chamber of a scanning electron microscope. Image stacks generated with this technology have a resolution sufficient to distinguish different cellular compartments, including synaptic structures. Such an image stack contains thousands of images and is recorded with a voxel size of 23 nm in the x- and y-directions and 30 nm in the z-direction. Consequently, a tissue block of 1 mm³ produces 63 terabytes of data. Therefore, new concepts for managing large data sets and automated image processing are required. I developed image segmentation and 3D reconstruction software which allows precise contour tracing of cell membranes and simultaneously displays the resulting 3D structure. The software contains two stand-alone packages, Neuron2D and Neuron3D, both offering an easy-to-operate graphical user interface (GUI). The software package Neuron2D provides the following image processing functions:
    • Image Registration: combination of multiple SBFSEM image tiles.
    • Image Preprocessing: filtering of image stacks. Implemented are Gaussian and Non-Linear-Diffusion filters in 2D and 3D. This step enhances the contrast between contour lines and image background, leading to a higher signal-to-noise ratio and thus further improving the detection of membrane borders.
    • Image Segmentation: the implemented algorithms extract contour lines from the preceding image and automatically trace the contour lines in the following images (z-direction), taking the previous image segmentation into account. They also permit image segmentation starting at any position in the image stack. In addition, manual interaction is possible.
    To visualize 3D structures of neuronal circuits, the additional software Neuron3D was developed. The program relies on the contour line information provided by Neuron2D to implement a surface reconstruction algorithm based on dynamic time warping. Additional rendering techniques, such as shading and texture mapping, are provided. The detailed anatomical reconstruction provides a framework for computational models of neuronal circuits. For example in flies, where moving retinal images lead to appropriate course control signals, the circuit reconstruction of motion-sensitive neurons can help to further understand the neural processing of visual motion in flies.
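    A quick back-of-the-envelope check of the 63-terabyte figure quoted above, assuming one byte per voxel (the text does not state the bit depth explicitly):

```python
# Voxel grid for a 1 mm^3 tissue block at 23 x 23 x 30 nm voxel size.
nm_per_mm = 1_000_000
nx = nm_per_mm / 23          # voxels along x
ny = nm_per_mm / 23          # voxels along y
nz = nm_per_mm / 30          # voxels along z
voxels = nx * ny * nz        # ~6.3e13 voxels
print(f"{voxels / 1e12:.0f} TB at 1 byte per voxel")  # -> 63 TB
```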

    Automated Strategies in Multimodal and Multidimensional Ultrasound Image-based Diagnosis

    Medical ultrasonography is an effective technique for traditional anatomical and functional diagnosis. However, it requires visual examination by experienced clinicians, which is a laborious, time-consuming, and highly subjective procedure. Computer-aided diagnosis (CADx) has been extensively used in clinical practice to support the interpretation of images; nevertheless, current ultrasound CADx still entails substantial user-dependency and is unable to extract image data for prediction modelling. The aim of this thesis is to propose a set of fully automated strategies to overcome the limitations of ultrasound CADx. These strategies address multiple modalities (B-Mode, Contrast-Enhanced Ultrasound (CEUS), Power Doppler (PDUS), and Acoustic Angiography (AA)) and dimensions (2-D and 3-D imaging). The enabling techniques presented in this work are designed, developed, and quantitatively validated to efficiently improve overall patient diagnosis. The work is subdivided into two macro-sections: in the first part, two fully automated algorithms for the reliable quantification of 2-D B-Mode ultrasound skeletal muscle architecture and morphology are proposed. In the second part, two fully automated algorithms for the objective assessment and characterization of tumor vasculature in 3-D CEUS and PDUS thyroid tumors and in preclinical AA cancer growth are presented. In the first part, the MUSA (Muscle UltraSound Analysis) algorithm is designed to measure muscle thickness, fascicle length, and pennation angle; the TRAMA (TRAnsversal Muscle Analysis) algorithm is proposed to extract and analyze the Visible Cross-Sectional Area (VCSA). The MUSA and TRAMA algorithms have been validated on two datasets of 200 images; automatic measurements have been compared with expert operators' manual measurements. A preliminary statistical analysis was performed to prove the ability of texture analysis on the automatic VCSA to distinguish between healthy and pathological muscles. In the second part, quantitative assessment of tumor vasculature is proposed in two automated algorithms for the objective characterization of 3-D CEUS/Power Doppler thyroid nodules and for studying the evolution of fibrosarcoma invasion in preclinical 3-D AA imaging. The vasculature analysis relies on the quantification of architecture and vessel tortuosity. Vascular features obtained from CEUS and PDUS images of 20 thyroid nodules (10 benign, 10 malignant) have been used in a multivariate statistical analysis supported by histopathological results. Vasculature parametric maps of implanted fibrosarcoma were extracted from 8 rats investigated with 3-D AA along four time points (TPs), in control and tumor areas; results have been compared with previous manual findings in a longitudinal tumor growth study. The MUSA and TRAMA algorithms achieve a 100% segmentation success rate. The absolute difference between manual and automatic measurements is below 2% for muscle thickness and 4% for the VCSA (values between 5-10% are acceptable in clinical practice), suggesting that automatic and manual measurements can be used interchangeably. Texture feature extraction on the automatic VCSAs reveals that texture descriptors can distinguish healthy from pathological muscles with a 100% success rate for all four muscles. Vascular features extracted from the 20 thyroid nodules in 3-D CEUS and PDUS volumes can be used to distinguish benign from malignant tumors with a 100% success rate for both ultrasound techniques. Malignant tumors present higher values of the architecture and tortuosity descriptors; 3-D CEUS and PDUS imaging present the same accuracy in differentiating benign from malignant nodules. Vascular parametric maps extracted from the 8 rats along the 4 TPs in 3-D AA imaging show that parameters extracted from the control area are statistically different from those within the tumor volume. Tumor angiogenic vessels present a smaller diameter and higher tortuosity. Tumor evolution is characterized by significant growth of the vascular trees and a constant vessel diameter along the four TPs, confirming the previous findings. In conclusion, the proposed automated strategies perform strongly in segmentation, feature extraction, muscle disease detection, and tumor vascular characterization. These techniques can be extended to the investigation of other organs and diseases and embedded in ultrasound CADx, providing user-independent, reliable diagnosis.
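    For concreteness (not taken from the thesis, which does not spell out its tortuosity descriptors here): a minimal sketch of one widely used vessel tortuosity measure, the distance metric, defined as centerline arc length divided by the straight-line distance between the vessel's endpoints.

```python
import numpy as np

def distance_metric(centerline):
    """Tortuosity as arc length / chord length. A perfectly straight
    vessel scores 1.0; more tortuous vessels score higher.
    centerline: (N, 3) array of ordered 3D centerline points."""
    segments = np.diff(centerline, axis=0)
    arc = np.linalg.norm(segments, axis=1).sum()             # path length
    chord = np.linalg.norm(centerline[-1] - centerline[0])   # endpoint distance
    return arc / (chord + 1e-12)

# Usage: a half circle of radius 1 has arc length pi and chord 2,
# so its tortuosity is pi/2 ~ 1.57.
t = np.linspace(0.0, np.pi, 200)
half_circle = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
print(distance_metric(half_circle))
```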

    Applied Visualization in the Neurosciences and the Enhancement of Visualization through Computer Graphics

    The complexity and size of measured and simulated data in many fields of science are increasing constantly. Technical progress allows for capturing smaller features and more complex structures in the data. To make this data accessible to scientists, efficient and specialized visualization techniques are required. Maximum efficiency and value for the user can only be achieved by adapting visualization to the specific application area and the specific requirements of the scientific field. Part I: In the first part of my work, I address visualization in the neurosciences. Neuroscience tries to understand the human brain, from its smallest parts up to its global infrastructure. To achieve this ambitious goal, neuroscience uses a combination of three-dimensional data from a myriad of sources, like MRI, CT, or functional MRI. To handle this diversity of data types and sources, neuroscience needs specialized and well-evaluated visualization techniques. As a start, I introduce an extensive software package called "OpenWalnut". It forms the common base for developing and using visualization techniques with our neuroscientific collaborators. Using OpenWalnut, standard as well as novel visualization approaches become available to neuroscientific researchers. Afterwards, I introduce a very specialized method to illustrate the causal relations between brain areas, which were previously only representable via abstract graph models. I conclude the first part of my work with an evaluation of several standard visualization techniques in the context of simulated electrical fields in the brain. The goal of this evaluation was to clarify the advantages and disadvantages of the visualization techniques used for the neuroscientific community. We exemplified these using clinically relevant scenarios. Part II: Besides data preprocessing, which plays a tremendous role in visualization, the final graphical representation of the data is essential for understanding its structure and features. The graphical representation of data can be seen as the interface between the data and the human mind. The second part of my work focuses on improving the structural and spatial perception of visualizations -- on improving this interface. Unfortunately, visual improvements that borrow computer graphics methods from the computer game industry are often viewed skeptically. In the second part, I show that such methods can be applied to existing visualization techniques to improve spatiality and to emphasize structural details in the data. I use a computer graphics paradigm called "screen space rendering". Its advantage, among others, is its seamless applicability to nearly every visualization technique as a post-processing step. I start with two methods that improve the perception of mesh-like structures on arbitrary surfaces. Those mesh structures represent second-order tensors and are generated by a method named "TensorMesh". Afterwards, I show a novel approach to optimally shade line and point data renderings. With this technique it is possible, for the first time, to emphasize both local details and global spatial relations in dense line and point data.
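    To illustrate the screen-space paradigm (a toy stand-in, not the line and point shading technique developed in the thesis): a post-process that needs only the depth buffer of an already rendered image, which is what makes it applicable to nearly any visualization technique.

```python
import numpy as np

def screen_space_emphasis(depth, radius=2):
    """Darken pixels that sit in front of their local neighborhood,
    a crude depth-contrast cue for spatial structure. Operates purely
    in screen space: no knowledge of the rendered geometry is needed.
    depth: 2D float array of per-pixel depths in [0, 1]."""
    h, w = depth.shape
    pad = np.pad(depth, radius, mode='edge')
    k = 2 * radius + 1
    # Neighborhood mean depth via a box filter built from shifted copies.
    mean = np.zeros_like(depth)
    for dy in range(k):
        for dx in range(k):
            mean += pad[dy:dy + h, dx:dx + w]
    mean /= k * k
    prominence = np.clip(mean - depth, 0.0, None)  # nearer than surroundings
    return np.clip(1.0 - prominence, 0.0, 1.0)     # multiply into the color buffer
```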

    Computer Vision Techniques for Transcatheter Intervention

    Minimally invasive transcatheter technologies have demonstrated substantial promise for the diagnosis and treatment of cardiovascular diseases. For example, transcatheter aortic valve implantation (TAVI) is an alternative to aortic valve replacement (AVR) for the treatment of severe aortic stenosis, and transcatheter atrial fibrillation ablation (TAFA) is widely used for the treatment and cure of atrial fibrillation. In addition, catheter-based intravascular ultrasound (IVUS) and optical coherence tomography (OCT) imaging of coronary arteries provides important information about the coronary lumen, wall, and plaque characteristics. Qualitative and quantitative analysis of these cross-sectional image data will be beneficial for the evaluation and treatment of coronary artery diseases such as atherosclerosis. In all phases (preoperative, intraoperative, and postoperative) of the transcatheter intervention procedure, computer vision techniques (e.g., image segmentation, motion tracking) have been widely applied in the field to accomplish tasks such as annulus measurement, valve selection, catheter placement control, and vessel centerline extraction. This provides beneficial guidance for clinicians in surgical planning, disease diagnosis, and treatment assessment. In this paper, we present a systematic review of these state-of-the-art methods. We aim to give a comprehensive overview for researchers in the area of computer vision on the subject of transcatheter intervention. Research in medical computing is multi-disciplinary by nature, and hence it is important to understand the application domain, clinical background, and imaging modality so that methods and quantitative measurements derived from analyzing the imaging data are appropriate and meaningful. We thus provide an overview of background information on transcatheter intervention procedures, as well as a review of the computer vision techniques and methodologies applied in this area.

    Automated Segmentation of Cerebral Aneurysm Using a Novel Statistical Multiresolution Approach

    Cerebral Aneurysm (CA) is a vascular disease that threatens the lives of many adults. It affects almost 1.5-5% of the general population. Subarachnoid Hemorrhage (SAH), caused by a ruptured CA, has high rates of morbidity and mortality. Therefore, radiologists aim to detect and diagnose it at an early stage by analyzing medical images, in order to prevent or reduce its damage. The analysis process is traditionally done manually. However, with the emergence of technology, Computer-Aided Diagnosis (CAD) algorithms are adopted in clinics to overcome the disadvantages of the traditional process, such as the dependency on the radiologist's experience, inter- and intra-observer variability, the probability of error, which grows with the number of medical images to be analyzed, and the artifacts added by the medical image acquisition methods (i.e., MRA, CTA, PET, RA, etc.), which impede the radiologist's work. For these reasons, many research works propose different segmentation approaches to automate the analysis process of detecting a CA using complementary segmentation techniques; but due to the challenging task of developing a robust, reproducible, and reliable algorithm to detect a CA regardless of its shape, size, and location across a variety of acquisition methods, a diversity of proposed and developed approaches exists which still suffer from some limitations. This thesis aims to contribute to this research area by adopting two promising techniques based on multiresolution and statistical approaches in the two-dimensional (2D) domain. The first technique is the Contourlet Transform (CT), which empowers the segmentation by extracting features not apparent at the normal image scale. The second technique is the Hidden Markov Random Field model with Expectation Maximization (HMRF-EM), which segments the image based on the relationships of neighboring pixels in the contourlet domain. The developed algorithm reveals promising results on the four tested Three-Dimensional Rotational Angiography (3D RA) datasets, for which both an objective and a subjective evaluation were carried out. For the objective evaluation, six performance metrics are adopted: accuracy, Dice Similarity Index (DSI), False Positive Ratio (FPR), False Negative Ratio (FNR), specificity, and sensitivity. For the subjective evaluation, one expert and four observers with some medical background were involved to assess the segmentation visually. Both evaluations compare the segmented volumes against the ground truth data.
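    The six objective metrics listed above all derive from the voxel-wise confusion matrix; a small reference sketch, assuming binary segmentation masks and the standard definitions (the thesis may normalize FPR and FNR differently):

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute the six listed metrics from boolean masks of equal shape."""
    tp = np.sum(pred & truth)     # aneurysm voxels correctly found
    tn = np.sum(~pred & ~truth)   # background voxels correctly rejected
    fp = np.sum(pred & ~truth)    # background labeled as aneurysm
    fn = np.sum(~pred & truth)    # aneurysm voxels missed
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "DSI":         2 * tp / (2 * tp + fp + fn),  # Dice Similarity Index
        "FPR":         fp / (fp + tn),               # False Positive Ratio
        "FNR":         fn / (fn + tp),               # False Negative Ratio
        "specificity": tn / (tn + fp),
        "sensitivity": tp / (tp + fn),
    }
```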

    DCT Implementation on GPU

    There has been great progress in the field of graphics processors. Since the speed of conventional CPU processors is no longer rising, designers are coming up with multi-core, parallel processors. Because of their strength in parallel processing, GPUs are becoming more and more attractive for many applications. With the increasing demand for utilizing GPUs, there is a great need to develop operating systems that handle the GPU to its full capacity. GPUs offer a very efficient environment for many image processing applications. This thesis explores the processing power of GPUs for digital image compression using the Discrete Cosine Transform (DCT).
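    A CPU reference of the blockwise 2-D DCT that underlies such compression (a sketch only; the thesis targets a GPU implementation, where the natural mapping is one independent 8x8 block per thread block):

```python
import numpy as np
from scipy.fftpack import dct, idct

def dct2(block):
    """Orthonormal 2D DCT-II of one image block, applied separably."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeffs):
    return idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')

def compress_block(block, keep=10):
    """Crude compression: zero all but the `keep` largest-magnitude
    DCT coefficients, then reconstruct."""
    c = dct2(block)
    threshold = np.sort(np.abs(c).ravel())[-keep]
    c[np.abs(c) < threshold] = 0.0
    return idct2(c)

# 8x8 blocks as in JPEG. Each block is independent of all the others,
# which is exactly why the transform parallelizes well on a GPU.
block = np.random.default_rng(1).random((8, 8))
print(np.abs(block - compress_block(block)).max())  # reconstruction error
```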