Visualisation of multi-dimensional medical images with application to brain electrical impedance tomography
Medical imaging plays an important role in modern medicine. With the increasing complexity of medical images and the information they present, visualisation is vital for interpreting these images in medical research and clinical applications. The aim of this research is to investigate improvements to medical image visualisation, particularly for multi-dimensional medical image datasets. A recently developed medical imaging technique known as Electrical Impedance Tomography (EIT) is presented as a demonstration. To fulfil the aim, three main efforts are included in this work.
First, a novel scheme for the processing of brain EIT data with SPM (Statistical Parametric Mapping) to detect ROIs (Regions of Interest) in the data is proposed, based on a theoretical analysis. To evaluate the feasibility of this scheme, two types of experiments are carried out: one is implemented with simulated EIT data, and the other is performed with human brain EIT data under visual stimulation. The experimental results demonstrate that SPM is able to localise the expected ROI in EIT data correctly, and that it is reasonable to use the balloon hemodynamic change model to simulate the impedance change during brain function activity.
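The SPM approach described above fits a general linear model to each voxel's time series and thresholds the resulting statistic map. A minimal sketch of that per-voxel GLM/t-map idea, on entirely synthetic "impedance" data (the boxcar stimulus, effect size, and threshold below are all made up for illustration, not taken from the thesis):

```python
import numpy as np

# Per-voxel GLM illustration of the SPM idea: regress each voxel's
# time series on a stimulus regressor, form a t-statistic map, and
# threshold it to obtain candidate ROIs. All data are synthetic.
rng = np.random.default_rng(0)
T = 100                                 # time points
stim = (np.arange(T) % 20) < 10         # boxcar stimulus on/off
X = np.column_stack([stim.astype(float), np.ones(T)])  # design matrix

vox = rng.normal(0.0, 1.0, (T, 3))      # 3 voxels of pure noise
vox[:, 0] += 2.0 * stim                 # voxel 0 responds to the stimulus

beta, *_ = np.linalg.lstsq(X, vox, rcond=None)
resid = vox - X @ beta
dof = T - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
c = np.array([1.0, 0.0])                # contrast: stimulus effect
var_c = c @ np.linalg.inv(X.T @ X) @ c
t_map = (c @ beta) / np.sqrt(sigma2 * var_c)

roi = np.flatnonzero(np.abs(t_map) > 5.0)  # simple threshold
print(roi)   # only voxel 0 survives the threshold
```

Real SPM additionally convolves the stimulus with a hemodynamic response model and applies random-field corrections; this sketch keeps only the core regression-and-threshold step.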
Secondly, to deal with the absence of human morphology information in EIT visualisation, an innovative landmark-based registration scheme is developed to register brain EIT images with a standard anatomical brain atlas.
Finally, a new task typology model is derived for task exploration in medical image visualisation, and a task-based system development methodology is proposed for the visualisation of multi-dimensional medical images. As a case study, a prototype visualisation system, named EIT5DVis, has been developed following this methodology to visualise five-dimensional brain EIT data. The EIT5DVis system is able to accept visualisation tasks through a graphical user interface; apply appropriate methods to analyse tasks, which include the ROI detection approach and registration scheme mentioned in the preceding paragraphs; and produce various visualisations.
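The thesis's registration scheme itself is not reproduced here, but the generic landmark-based formulation it builds on can be sketched briefly: given corresponding landmarks picked in the EIT space and the atlas space, solve for the affine transform by linear least squares (all coordinates below are invented for illustration):

```python
import numpy as np

# Generic landmark-based registration sketch: estimate the affine
# transform that best maps landmarks in one image space (e.g. an EIT
# reconstruction) onto corresponding atlas landmarks, via least squares.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                [0., 0., 1.], [1., 1., 1.]])
# Ground-truth transform used only to synthesise the target landmarks:
A_true = np.diag([2.0, 1.5, 1.0])
t_true = np.array([10.0, -5.0, 3.0])
dst = src @ A_true.T + t_true

# Solve dst ≈ [src | 1] @ M for the 4x3 affine parameter matrix M.
src_h = np.hstack([src, np.ones((len(src), 1))])
M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)

# Apply the estimated transform to a new point (homogeneous coords).
p = np.array([0.5, 0.5, 0.5, 1.0])
print(p @ M)   # ≈ [11.0, -4.25, 3.5]
```

With more landmarks than parameters, the same least-squares solve averages out landmark-picking error rather than recovering the transform exactly.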
Development of advanced 3D medical analysis tools for clinical training, diagnosis and treatment
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. The objective of this PhD research was the development of novel 3D interactive medical platforms for medical image analysis, simulation and visualisation, with a focus on oncology images, to support clinicians in managing the increasing amount of data provided by several medical image modalities.
DoctorEye and Automatic Tumour Detector platforms were developed through constant interaction and feedback from expert clinicians, integrating a number of innovations in algorithms and methods concerning image handling, segmentation, annotation, visualisation and plug-in technologies. DoctorEye is already being used in a related tumour modelling EC project (ContraCancrum) and offers several robust algorithms and tools for fast annotation, 3D visualisation and measurements to assist the clinician in better understanding the pathology of the brain area and defining the treatment. It is free to use upon request and offers a user-friendly environment for clinicians, as it simplifies the implementation of complex algorithms and methods. It integrates a sophisticated, simple-to-use plug-in technology allowing researchers to add algorithms and methods (e.g. tumour growth and simulation algorithms for improving therapy planning) and interactively check the results. Apart from diagnostic and research purposes, it supports clinical training, as it allows an expert clinician to evaluate a clinical delineation by different clinical users.

The Automatic Tumour Detector focuses on abdominal images, which are more complex than those of the brain. It supports fully automatic 3D detection of kidney pathology in real-time as well as 3D advanced visualisation and measurements. This is achieved through an innovative method implementing Templates. They contain rules and parameters for the Automatic Recognition Framework, defined interactively by engineers based on clinicians' 3D Golden Standard models. The Templates enable the automatic detection of kidneys and their possible abnormalities (tumours, stones and cysts). The system also supports the transmission of these Templates to another expert for a second opinion.
Future versions of the proposed platforms could integrate even more sophisticated algorithms and tools and offer fully computer-aided identification of a variety of other organs and their dysfunctions.
Ubiquitous volume rendering in the web platform
176 p. The main thesis hypothesis is that ubiquitous volume rendering can be achieved using WebGL. The thesis enumerates the challenges that must be met to achieve that goal. The results allow web content developers to integrate interactive volume rendering within standard HTML5 web pages. Content developers only need to declare the X3D nodes that provide the rendering characteristics they desire. In contrast to systems that require specific GPU programs, the presented architecture automatically creates the GPU code required by the WebGL graphics pipeline. This code is generated directly from the X3D nodes declared in the virtual scene. Therefore, content developers do not need to know about the GPU. The thesis extends previous research on web-compatible volume data structures for WebGL, ray-casting hybrid surface and volumetric rendering, progressive volume rendering, and some specific problems related to the visualization of medical datasets. Finally, the thesis contributes to the X3D standard with some proposals to extend and improve the volume rendering component. The proposals are at an advanced stage towards their acceptance by the Web3D Consortium.
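The thesis generates the GPU ray-casting code from X3D declarations; that generated shader ultimately evaluates front-to-back emission-absorption compositing per pixel. The core accumulation loop can be sketched in plain Python, independent of WebGL and X3D (the transfer function and sample values below are invented for illustration):

```python
import numpy as np

# Front-to-back compositing along a single ray, the core operation a
# volume ray-caster performs per fragment. A transfer function maps
# each sampled density to colour and opacity; samples are accumulated
# until the ray is (nearly) opaque. Purely illustrative.
def composite(samples, transfer):
    colour, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)
        colour += (1.0 - alpha) * a * c      # front-to-back accumulation
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                     # early ray termination
            break
    return colour, alpha

# Toy transfer function: density maps linearly to grey and opacity.
tf = lambda s: (s, 0.5 * s)

ray_samples = np.linspace(0.0, 1.0, 8)       # densities along one ray
colour, alpha = composite(ray_samples, tf)
print(round(alpha, 3))
```

In the WebGL pipeline this loop runs in a fragment shader over a 3D (or atlas-packed 2D) texture; early ray termination is one of the standard optimisations such generated shaders include.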
Predicting Slice-to-Volume Transformation in Presence of Arbitrary Subject Motion
This paper aims to solve a fundamental problem in intensity-based 2D/3D
registration, which concerns the limited capture range and need for very good
initialization of state-of-the-art image registration methods. We propose a
regression approach that learns to predict rotation and translations of
arbitrary 2D image slices from 3D volumes, with respect to a learned canonical
atlas co-ordinate system. To this end, we utilize Convolutional Neural Networks
(CNNs) to learn the highly complex regression function that maps 2D image
slices into their correct position and orientation in 3D space. Our approach is
attractive in challenging imaging scenarios, where significant subject motion
complicates reconstruction performance of 3D volumes from 2D slice data. We
extensively evaluate the effectiveness of our approach quantitatively on
simulated MRI brain data with extreme random motion. We further demonstrate
qualitative results on fetal MRI where our method is integrated into a full
reconstruction and motion compensation pipeline. With our CNN regression
approach we obtain an average prediction error of 7mm on simulated data, and
convincing reconstruction quality of images of very young fetuses where
previous methods fail. We further discuss applications to Computed Tomography
and X-ray projections. Our approach is a general solution to the 2D/3D
initialization problem. It is computationally efficient, with prediction times
per slice of a few milliseconds, making it suitable for real-time scenarios.
Comment: 8 pages, 4 figures, 6 pages supplemental material, currently under review for MICCAI 201
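The regression network itself is beyond the scope of a sketch, but the quantity it predicts is easy to make concrete: three rotation angles and three translations per slice, assembled into the 4x4 rigid slice-to-volume transform that initialises intensity-based registration. The parameter values below are hypothetical, not outputs of the paper's model:

```python
import numpy as np

# Assemble predicted rigid parameters (3 Euler angles, 3 translations;
# the values below are made up) into a 4x4 slice-to-volume transform.
def rigid_matrix(rx, ry, rz, tx, ty, tz):
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx                 # Z-Y-X Euler convention
    T[:3, 3] = [tx, ty, tz]
    return T

# Hypothetical network output for one slice:
pred = dict(rx=0.1, ry=-0.05, rz=0.2, tx=4.0, ty=-2.0, tz=7.0)
T = rigid_matrix(**pred)

# Map a pixel at slice coordinates (u, v, 0) into volume space:
p_slice = np.array([10.0, 5.0, 0.0, 1.0])
p_vol = T @ p_slice
print(p_vol[:3])
```

An intensity-based registration then only needs to refine this pose locally, which is why a coarse but robust CNN prediction suffices to extend the capture range.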
An Investigation towards Challenges in medical image processing
Imaging is important in today's healthcare since it is used at every stage of the clinical process, from diagnosis and treatment planning to surgery and follow-up investigations. Large data volumes pose challenges for medical image processing because most imaging modalities have gone completely digital with ever-increasing resolution. This work addresses difficulties in the range of kilo- to terabytes related to bioimaging, virtual reality in medical visualisations, bioimage management, and neuroimaging. Algorithms for image processing and visualisation must be adapted to the growing volume of data. With the aid of graphical processing units, scalable algorithms and sophisticated parallelization strategies have been created; this publication provides a summary of them. Although these methods manage the difficulty from kilo- to terabytes, the petabyte level is quickly approaching. Medical image processing therefore remains an important area of study.
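One of the scaling strategies the survey alludes to, processing data in bounded chunks rather than loading an entire volume, can be illustrated with a toy slab-wise computation (the synthetic volume, slab size and threshold operation here are invented for illustration):

```python
import numpy as np

# Toy out-of-core pattern: process a 3D volume slab-by-slab so peak
# memory is bounded by one slab rather than the whole dataset. The
# per-slab operation here is a simple foreground-voxel count.
def count_foreground(volume, threshold, slab_size=16):
    total = 0
    for z in range(0, volume.shape[0], slab_size):
        slab = volume[z:z + slab_size]   # stands in for a chunked read
        total += int((slab > threshold).sum())
    return total

rng = np.random.default_rng(1)
vol = rng.random((64, 32, 32))
n = count_foreground(vol, 0.5)
# The slab-wise result matches the whole-volume computation:
print(n == int((vol > 0.5).sum()))   # True
```

The same decomposition is what makes GPU parallelization practical: independent slabs or tiles map naturally onto concurrent kernels.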
Gebiss: an ImageJ plugin for the specification of ground truth and the performance evaluation of 3D segmentation algorithms.
Background: Image segmentation is a crucial step in quantitative microscopy that helps to define regions of tissues, cells or subcellular compartments. Depending on the degree of user interactions, segmentation methods can be divided into manual, automated or semi-automated approaches. 3D image stacks usually require automated methods due to their large number of optical sections. However, certain applications benefit from manual or semi-automated approaches. Scenarios include the quantification of 3D images with poor signal-to-noise ratios or the generation of so-called ground truth segmentations that are used to evaluate the accuracy of automated segmentation methods.
Results: We have developed Gebiss, an ImageJ plugin for the interactive segmentation, visualisation and quantification of 3D microscopic image stacks. We integrated a variety of existing plugins for threshold-based segmentation and volume visualisation.
Conclusions: We demonstrate the application of Gebiss to the segmentation of nuclei in live Drosophila embryos and the quantification of neurodegeneration in Drosophila larval brains. Gebiss was developed as a cross-platform ImageJ plugin and is freely available on the web at http://imaging.bii.a-star.edu.sg/projects/gebiss
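The threshold-based segmentation and quantification steps that Gebiss wraps in an interactive ImageJ interface can be sketched in a few lines: binarise a 3D stack and report object volume in physical units (the synthetic stack, threshold, and voxel spacing below are made up for illustration):

```python
import numpy as np

# Threshold-based 3D segmentation and volume quantification, in the
# spirit of the steps Gebiss exposes interactively: binarise a stack,
# then convert the foreground voxel count into a physical volume.
def segment_and_quantify(stack, threshold, voxel_size_um):
    mask = stack > threshold                      # binary segmentation
    voxel_volume = float(np.prod(voxel_size_um))  # µm^3 per voxel
    return mask, mask.sum() * voxel_volume

# Synthetic stack: a bright 4x4x4 "nucleus" in a dark background.
stack = np.zeros((10, 10, 10))
stack[3:7, 3:7, 3:7] = 200.0

# Anisotropic voxel spacing (z, y, x) in micrometres, as is typical
# for confocal stacks:
mask, volume = segment_and_quantify(stack, 100.0, (0.5, 0.2, 0.2))
print(volume)   # 64 voxels * 0.02 µm^3 = 1.28 µm^3
```

Ground-truth generation, as described in the paper, amounts to a human curating such masks slice by slice; automated methods are then scored against them.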