
    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proven to be a powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to a better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters that are needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationship with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), view-point (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. The advancement of imaging acquisition techniques has led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal imaging acquisition to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. The manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters.
Our methods enable visualizations necessary for the diagnostic procedure, in which a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real-time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets. We also examined the computational performance of our methods for these scenarios.
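As a concrete illustration of the 'visibility' quantity that such VH-based methods optimize, the sketch below accumulates per-sample visibility along a single viewing ray using standard front-to-back compositing. It is a minimal, hypothetical example, assuming a constant-opacity transfer function; the function names are illustrative and not the thesis implementation:

```python
import numpy as np

def ray_visibility(intensities, tf_opacity):
    """Per-sample visibility along one ray, front to back.

    `intensities`: intensity samples along the ray;
    `tf_opacity`: transfer function mapping intensity -> opacity in [0, 1].
    A sample's visibility is the fraction of light that reaches the viewer
    through everything rendered in front of it.
    """
    acc = 0.0                            # opacity accumulated so far
    vis = np.empty(len(intensities))
    for i, s in enumerate(intensities):
        vis[i] = 1.0 - acc               # seen through what is in front
        acc += (1.0 - acc) * tf_opacity(s)
    return vis

# toy example: three samples under a constant 50%-opacity transfer function
v = ray_visibility(np.array([0.2, 0.5, 0.9]), lambda s: 0.5)
# v == [1.0, 0.5, 0.25]: each sample is half-occluded by the one before it
```

Summing these per-voxel visibilities into intensity bins, over all rays, yields the visibility histogram.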


    Micro- and nanoanatomy of human brain tissues

    “Better to see something once than to hear about it a thousand times.” (Proverb) The human brain is one of the most complex organs in the body, containing billions of neurons of hundreds of types. To understand its properties and functionality at the most fundamental level, one must reveal and describe its structure down to the (sub-)cellular level. In general, three-dimensional (3D) characterisation of physically soft tissues is a challenge. Thus, the possibility of performing non-destructive, label-free 3D imaging with reasonable sensitivity and resolution and increased manageable specimen sizes, especially within the laboratory environment, is of great interest. The focus of the thesis lies on the non-destructive 3D investigation of the micro- and nanoanatomy of human brain tissues. The ambitious challenge faced was to bridge the performance gap between tomography data from laboratory systems, the histological approaches employed by anatomists and pathologists, and synchrotron radiation-based tomography, by taking advantage of recent developments in X-ray tomographic imaging. The main milestones reached in the project include (i) visualisation of individual Purkinje cells in a label-free manner by laboratory-based absorption-contrast micro computed tomography (LBμCT), (ii) incorporation of a double-grating interferometer into the nanotom® m (GE Sensing & Inspection Technologies GmbH, Wunstorf, Germany) for phase-contrast imaging and (iii) visualisation and quantification of sub-cellular structures using nano-holotomography (nano-imaging beamline ID16A-NI, European Synchrotron Radiation Facility (ESRF), Grenoble, France). Hard X-ray micro computed tomography (μCT) in the absorption-contrast mode is well-established for hard tissue visualisation. However, its performance for lower-density materials, such as post mortem brain tissues, is questionable, as attenuation differences between anatomical features are weak.
It was demonstrated, through the example of a formalin-fixed paraffin-embedded (FFPE) human cerebellum, that absorption-contrast laboratory-based micro computed tomography can provide high-contrast images, complementary to hematoxylin and eosin (H&E) stained histological sections. To the best of our knowledge, the detection of individual Purkinje cells without a dedicated contrast agent is unique in the field of absorption-contrast laboratory-based micro computed tomography. As the intensity of H&E staining of histological sections and the attenuation contrast of LBμCT data demonstrated a correlation, pseudo-colouring of tomography data according to the H&E stain can be performed, virtually extending two-dimensional (2D) histology into the third dimension. The LBμCT of FFPE samples can be understood as a time-efficient and reliable tissue visualisation methodology, and so it could become a method of choice for imaging relatively large specimens within the laboratory environment. Comparing the data acquired at the LBμCT system nanotom® m and at synchrotron radiation facilities (Diamond-Manchester Imaging Branchline I13-2, Diamond Light Source, Didcot, UK and Microtomography beamline ID19, ESRF), it was demonstrated that all selected modalities, namely LBμCT, synchrotron radiation-based in-line phase-contrast tomography using single-distance phase reconstruction and synchrotron radiation-based grating interferometry, can reach cellular resolution. As phase contrast yields better data quality for soft tissues, and in order to overcome the restrictions of limited beamtime access for phase-contrast measurements, a commercially available advanced μCT system, the nanotom® m, was equipped with an X-ray double-grating interferometer (XDGI). The successful performance of the interferometer in the tomography mode was demonstrated on a human knee joint sample.
XDGI provided enough contrast (1.094 ± 0.152) and spatial resolution ((73 ± 6) μm) to identify the cartilage layer, which is not recognised in the absorption mode without staining. These results suggest that the extension of a commercially available absorption-contrast μCT system via grating interferometry offers the potential to fill the performance gap between LBμCT and synchrotron radiation-based phase-contrast μCT in visualising soft tissues. Although optical microscopy of stained tissue sections enables the quantification of neuron morphology within brain tissues in health and disease, the lateral spatial resolution of histological sections is limited by the wavelength of visible light, while the orthogonal resolution is usually restricted to the section's thickness. Based on the data acquired at ID16A-NI, the study demonstrated the application of hard X-ray nano-holotomography with isotropic voxels down to 25 nm for the three-dimensional visualisation of the human cerebellum and neocortex. The images exhibit a reasonable contrast-to-noise ratio and a spatial resolution of at least 84 nm. The three-dimensional data therefore resemble the surface images obtained by electron microscopy (EM), but without the need for electron-dense staining. The (sub-)cellular structures within the Purkinje, granule, stellate and pyramidal cells of the FFPE tissue blocks were resolved and segmented. Micrometre spatial resolution is routinely achieved at synchrotron radiation facilities worldwide, while reaching the isotropic 100-nm barrier for soft tissues without applying any dedicated contrast agent, labelling or tissue transformation is a challenge that could set a new standard in non-destructive 3D imaging.

    Scene analysis and risk estimation for domestic robots, security and smart homes

    The evaluation of risk within a scene is a new and emerging area of research. With the advent of smart-enabled homes and the continued development and implementation of domestic robotics, the platform for automated risk assessment within the home is now a possibility. The aim of this thesis is to explore a subsection of the problems facing the detection and quantification of risk in a domestic setting. A Risk Estimation framework is introduced which provides a flexible and context-aware platform from which measurable elements of risk can be combined to create a final risk score for a scene. To populate this framework, three elements of measurable risk are proposed and evaluated: firstly, scene stability, assessing the location and stability of objects within an environment through the use of physics simulation techniques; secondly, hazard feature analysis, using two specifically designed novel feature descriptors (3D Voxel HOG and the Physics Behaviour Feature) which determine whether the objects within a scene have dangerous or risky properties such as blades or points; finally, environment interaction, which uses human behaviour simulation to predict human reactions to detected risks and to highlight the areas of a scene most likely to be visited. Additionally, methodologies are introduced to support these concepts, including: a simulation prediction framework which reduces the computational cost of physics simulation, and a Robust Filter and Complex Adaboost which aim to improve the robustness and reduce the training times of hazard feature classification models. The Human and Group Behaviour Evaluation framework is introduced to provide a platform from which simulation algorithms can be evaluated without the need for extensive ground truth data. Finally, the 3D Risk Scenes (3DRS) dataset is introduced, creating a risk-specific dataset for the evaluation of future domestic risk analysis methodologies.
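The combination of measurable risk elements into a final scene score could, for example, take the form of a context-weighted mean. The sketch below is a hypothetical illustration only: the element names, weights and the weighted-mean rule are assumptions, not the framework's actual formulation:

```python
def scene_risk(elements, weights):
    """Weighted mean of per-element risk scores in [0, 1].

    `elements`: element name -> risk score; `weights`: element name -> weight.
    The weights stand in for the framework's scene context, which could
    re-weight elements (e.g. stability matters more in a cluttered room).
    """
    total = sum(weights[name] for name in elements)
    return sum(weights[name] * score for name, score in elements.items()) / total

score = scene_risk(
    {"stability": 0.8, "hazard_features": 0.3, "interaction": 0.5},
    {"stability": 1.0, "hazard_features": 2.0, "interaction": 1.0},
)
# (1.0*0.8 + 2.0*0.3 + 1.0*0.5) / 4.0 = 0.475
```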

    Efficient Visibility-driven Medical Image Visualisation via Adaptive Binned Visibility Histogram

    ‘Visibility’ is a fundamental optical property that represents the proportion of the voxels in a volume that is observable by users during interactive volume rendering. The manipulation of this ‘visibility’ improves volume rendering processes, for instance by ensuring the visibility of regions of interest (ROIs) or by guiding the identification of an optimal rendering view-point. The construction of visibility histograms (VHs), which represent the distribution of the visibility of all voxels in the rendered volume, enables users to explore the volume with real-time feedback about occlusion patterns among spatially related structures during volume rendering manipulations. Volume rendered medical images have been a primary beneficiary of VHs, given the need to ensure that specific ROIs are visible relative to the surrounding structures, e.g., in the visualisation of tumours that may otherwise be occluded by neighbouring structures. VH construction and its subsequent manipulations, however, are computationally expensive due to the histogram binning of the visibilities. This limits the real-time application of VHs to medical images that have large intensity ranges and volume dimensions and require a large number of histogram bins. In this study, we introduce an efficient adaptive binned visibility histogram (AB-VH) in which a smaller number of histogram bins is used to represent the visibility distribution of the full VH. We adaptively bin medical images by using a cluster analysis algorithm that groups the voxels according to their intensity similarities into a smaller subset of bins while preserving the distribution of the intensity range of the original images. We increase efficiency by exploiting the parallel computation and multiple render targets (MRT) extension of modern graphical processing units (GPUs), which enables efficient computation of the histogram.
We show the application of our method to single-modality computed tomography (CT), magnetic resonance (MR) imaging and multi-modality positron emission tomography-CT (PET-CT). In our experiments, the AB-VH markedly improved the computational efficiency of VH construction and thus of the subsequent VH-driven volume manipulations. This efficiency was achieved without major visual degradation of the VH or large numerical differences between the AB-VH and its full-bin counterpart. We applied several variants of the K-means clustering algorithm with varying K (the number of clusters) and found that higher values of K resulted in better fidelity at the cost of a smaller computational gain. The AB-VH also outperformed the conventional method of down-sampling the histogram bins (equal binning) for volume rendering visualisation.
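The adaptive binning idea can be sketched with a simple 1-D K-means over voxel intensities: cluster centres become the representatives of a reduced set of histogram bins, and each voxel's visibility is summed into its nearest bin. This is a CPU illustration under stated assumptions (the actual method runs on the GPU with MRT, and the function names here are hypothetical):

```python
import numpy as np

def adaptive_bins(intensities, k, iters=20, seed=0):
    """1-D K-means over voxel intensities; the sorted cluster centres
    serve as the representatives of the adaptive histogram bins."""
    rng = np.random.default_rng(seed)
    centres = rng.choice(np.unique(intensities), size=k, replace=False).astype(float)
    for _ in range(iters):
        # assign every voxel to its nearest centre, then recompute centres
        labels = np.argmin(np.abs(intensities[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            members = intensities[labels == j]
            if len(members):
                centres[j] = members.mean()
    return np.sort(centres)

def binned_visibility_histogram(intensities, visibilities, centres):
    """Sum each voxel's visibility into its nearest adaptive bin."""
    labels = np.argmin(np.abs(intensities[:, None] - centres[None, :]), axis=1)
    hist = np.zeros(len(centres))
    np.add.at(hist, labels, visibilities)
    return hist

# toy volume: two intensity populations around 100 and 300
rng = np.random.default_rng(0)
ints = np.concatenate([rng.normal(100, 5, 500), rng.normal(300, 5, 500)])
centres = adaptive_bins(ints, k=2)
hist = binned_visibility_histogram(ints, np.ones_like(ints), centres)
# hist holds the total visibility mass per adaptive bin
```

The clustering preserves the shape of the intensity distribution with far fewer bins than a full-range histogram, which is the source of the computational gain.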

    Methods for Real-time Visualization and Interaction with Landforms

    This thesis presents methods to enrich data modeling and analysis in the geoscience domain with a particular focus on geomorphological applications. First, a short overview of the relevant characteristics of the used remote sensing data and basics of its processing and visualization are provided. Then, two new methods for the visualization of vector-based maps on digital elevation models (DEMs) are presented. The first method uses a texture-based approach that generates a texture from the input maps at runtime taking into account the current viewpoint. In contrast to that, the second method utilizes the stencil buffer to create a mask in image space that is then used to render the map on top of the DEM. A particular challenge in this context is posed by the view-dependent level-of-detail representation of the terrain geometry. After suitable visualization methods for vector-based maps have been investigated, two landform mapping tools for the interactive generation of such maps are presented. The user can carry out the mapping directly on the textured digital elevation model and thus benefit from the 3D visualization of the relief. Additionally, semi-automatic image segmentation techniques are applied in order to reduce the amount of user interaction required and thus make the mapping process more efficient and convenient. The challenge in the adaptation of the methods lies in the transfer of the algorithms to the quadtree representation of the data and in the application of out-of-core and hierarchical methods to ensure interactive performance. Although high-resolution remote sensing data are often available today, their effective resolution at steep slopes is rather low due to the oblique acquisition angle. For this reason, remote sensing data are suitable to only a limited extent for visualization as well as landform mapping purposes.
To provide an easy way to supply additional imagery, an algorithm for registering uncalibrated photos to a textured digital elevation model is presented. A particular challenge in registering the images is posed by large variations in the photos concerning resolution, lighting conditions, seasonal changes, etc. The registered photos can be used to increase the visual quality of the textured DEM, in particular at steep slopes. To this end, a method is presented that combines several georegistered photos into textures for the DEM. The difficulty in this compositing process is to create a consistent appearance and avoid visible seams between the photos. In addition, the photos provide valuable means to improve landform mapping. For this purpose, an extension of the landform mapping methods is presented that allows the utilization of the registered photos during mapping. This way, a detailed and exact mapping becomes feasible even at steep slopes.
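The compositing step described above can be pictured as a per-pixel normalised weighted blend of the co-registered photos, where feathered weights suppress visible seams. The sketch below is a minimal illustration under these assumptions, not the thesis algorithm:

```python
import numpy as np

def composite(photos, weights):
    """Blend co-registered photos into one texture.

    `photos`: list of (H, W, 3) images in the same texture space;
    `weights`: list of (H, W) per-pixel weights (e.g. higher where a photo
    views the slope head-on, feathered toward each photo's border).
    Normalising the weights per pixel makes overlaps fade smoothly.
    """
    w = np.stack(weights).astype(float)          # (n, H, W)
    w /= np.clip(w.sum(axis=0), 1e-9, None)      # per-pixel normalisation
    imgs = np.stack(photos).astype(float)        # (n, H, W, 3)
    return (w[..., None] * imgs).sum(axis=0)     # (H, W, 3) texture

# two flat 'photos' with equal weight everywhere -> per-pixel mean
a = np.full((2, 2, 3), 100.0)
b = np.full((2, 2, 3), 200.0)
tex = composite([a, b], [np.ones((2, 2)), np.ones((2, 2))])
# tex == 150.0 everywhere
```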

    Optical trapping: optical interferometric metrology and nanophotonics

    The two main themes in this thesis are the implementation of interference methods with optically trapped particles for measurements of position and optical phase (optical interferometric metrology) and the optical manipulation of nanoparticles for studies in the assembly of nanostructures, nanoscale heating and nonlinear optics (nanophotonics). The first part of the thesis provides an introductory overview of optical trapping (chapter 1) and describes the basic experimental instrument used in the thesis (chapter 2). The second part (chapters 3 to 5) investigates the use of interference patterns of the diffracted light fields from optically trapped microparticles for three types of measurements: calibrating particle positions in an optical trap, determining the stiffness of an optical trap and measuring the change in phase or coherence of a given light field. The third part (chapters 6 to 8) studies the interactions between optical traps and nanoparticles in three separate experiments: the optical manipulation of dielectric enhanced semiconductor nanoparticles, heating of optically trapped gold nanoparticles and collective optical response from an ensemble of optically trapped dielectric nanoparticles.
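Trap-stiffness calibration, one of the measurements listed above, is commonly illustrated with the equipartition method, which relates the variance of a trapped bead's position fluctuations to the stiffness via k = k_B T / ⟨x²⟩. The sketch below applies this standard method to synthetic data; it is not the interferometric approach developed in the thesis, and the temperature and displacement values are assumptions:

```python
import numpy as np

kB = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=295.0):
    """Equipartition estimate of trap stiffness along one axis:
    k = kB * T / <x^2>, with x the displacement from the mean position."""
    x = positions_m - positions_m.mean()
    return kB * temperature_k / np.mean(x**2)

# synthetic bead track: Gaussian positions with 10 nm RMS displacement
rng = np.random.default_rng(1)
pos = 10e-9 * rng.standard_normal(200_000)
k_est = trap_stiffness(pos)  # roughly kB*295 / 1e-16, i.e. ~4e-5 N/m
```

In practice the position signal would come from the trap's detection system (for instance the interferometric schemes studied in the thesis), and the equipartition result is often cross-checked against power-spectrum calibration.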