27 research outputs found

    3D reconstruction of medical images from slices automatically landmarked with growing neural models

    In this study, we use a novel approach to segment the ventricular system in a series of high-resolution T1-weighted MR images and present a fast method for reconstructing the brain ventricles in 3D. The method processes the brain sections and places a fixed number of landmarks on each section, from which the 3D surface of the ventricles is reconstructed. Automated landmark extraction is performed with a self-organising network, the growing neural gas (GNG), which topographically maps the low-dimensional network onto the high-dimensional contour manifold without requiring a priori knowledge of the input space structure. Moreover, our GNG landmarking is tolerant to noise and eliminates outliers. The method accelerates the classical surface reconstruction and filtering processes, and offers higher accuracy than methods of similar efficiency, such as Voxel Grid.
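
    The abstract does not spell out the GNG update rules, so the following is only a minimal sketch of a standard, Fritzke-style growing neural gas applied to 2D contour points from a single slice. The function name `gng_landmarks` and all parameter values (`eps_b`, `eps_n`, `age_max`, `lambda_`, `alpha`, `d`) are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch: standard growing neural gas used to place a fixed number of
# landmarks on 2D contour points.  All names and parameter values are assumptions.
import numpy as np

def gng_landmarks(points, n_landmarks=50, n_epochs=20, eps_b=0.2, eps_n=0.006,
                  age_max=50, lambda_=100, alpha=0.5, d=0.995, seed=0):
    rng = np.random.default_rng(seed)
    nodes = [points[rng.integers(len(points))].astype(float) for _ in range(2)]
    error = [0.0, 0.0]
    edges = {}                                    # (i, j) with i < j  ->  age

    def neighbours(i):
        return [b if a == i else a for (a, b) in edges if i in (a, b)]

    step = 0
    for _ in range(n_epochs):
        for x in rng.permutation(points):
            step += 1
            dists = [np.linalg.norm(x - n) for n in nodes]
            s1, s2 = np.argsort(dists)[:2]        # winner and runner-up
            error[s1] += dists[s1] ** 2
            nodes[s1] += eps_b * (x - nodes[s1])  # pull the winner towards the sample
            for n in neighbours(s1):              # ...and its topological neighbours
                nodes[n] += eps_n * (x - nodes[n])
            for e in list(edges):                 # age edges incident to the winner
                if s1 in e:
                    edges[e] += 1
            edges[tuple(sorted((s1, s2)))] = 0    # refresh (or create) the winner edge
            edges = {e: a for e, a in edges.items() if a <= age_max}
            if step % lambda_ == 0 and len(nodes) < n_landmarks:
                q = int(np.argmax(error))         # node accumulating the most error
                nbrs = neighbours(q)
                if nbrs:
                    f = max(nbrs, key=lambda i: error[i])
                    r = len(nodes)
                    nodes.append(0.5 * (nodes[q] + nodes[f]))   # insert a new node
                    edges.pop(tuple(sorted((q, f))), None)
                    edges[tuple(sorted((q, r)))] = 0
                    edges[tuple(sorted((f, r)))] = 0
                    error[q] *= alpha
                    error[f] *= alpha
                    error.append(error[q])
            error = [e * d for e in error]        # global error decay
    return np.array(nodes), list(edges)

# Toy usage: place landmarks on a noisy circular contour.
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 2000)
contour = np.c_[np.cos(theta), np.sin(theta)] + 0.02 * rng.normal(size=(2000, 2))
landmarks, topology = gng_landmarks(contour, n_landmarks=30)
```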

    Brain tumor visualization for magnetic resonance images using modified shape-based interpolation method

    3D visualization plays an essential role in medical diagnosis and treatment planning, especially for brain cancer. There have been many attempts at brain tumor reconstruction and visualization using various techniques, yet the problem is still considered unsolved, as more accurate results are needed in this critical field. In this paper, a sequence of 2D brain magnetic resonance image slices was used to reconstruct a 3D model of the brain tumor. The images were automatically segmented using a wavelet multi-resolution expectation maximization algorithm. The inter-slice gaps were then interpolated using the proposed modified shape-based interpolation method. The method involves three main steps: transforming the binary tumor images into distance images using a suitable distance function, interpolating the distance images using cubic spline interpolation, and thresholding the interpolated values to obtain the reconstructed slices. The final tumor is then visualized as a 3D isosurface. We evaluated the proposed method by removing an original slice from the input images and interpolating it; the results outperform the original shape-based interpolation method by an average of 3%, reaching 99% accuracy for some slices.
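
    As a rough illustration of the three steps just listed (distance transform, cubic spline interpolation across slices, thresholding), here is a minimal SciPy sketch. The signed-distance choice, the helper names and the toy data are assumptions and do not reproduce the authors' exact distance function or evaluation protocol.

```python
# Sketch of shape-based interpolation: binary masks -> signed distance maps ->
# cubic spline interpolation along the slice axis -> threshold back to binary.
import numpy as np
from scipy.ndimage import distance_transform_edt
from scipy.interpolate import CubicSpline

def signed_distance(mask):
    """Positive inside the object, negative outside (assumed distance function)."""
    inside = distance_transform_edt(mask)
    outside = distance_transform_edt(~mask)
    return inside - outside

def interpolate_slices(masks, z_known, z_new):
    """masks: (n_slices, H, W) boolean array located at slice positions z_known."""
    dist = np.stack([signed_distance(m) for m in masks])   # (n, H, W) distance images
    spline = CubicSpline(z_known, dist, axis=0)            # one spline per pixel column
    return spline(z_new) > 0                                # threshold -> binary slices

# Toy usage: a shrinking disc sampled on every other slice, reconstructed in between.
H = W = 64
yy, xx = np.mgrid[:H, :W]
z_known = np.arange(0, 10, 2)
masks = np.stack([(xx - 32) ** 2 + (yy - 32) ** 2 < (20 - z) ** 2 for z in z_known])
recon = interpolate_slices(masks, z_known, np.arange(10))
```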

    Correlative light and electron microscopy: new strategies for improved throughput and targeting precision

    The need for quantitative analysis is crucial when studying fundamental mechanisms in cell biology. Common assays consist of interfering with a system via protein knockdowns or drug treatments. These very often lead to considerable response variability, which is generally addressed by analyzing large populations. Whilst the imaging throughput of light microscopy (LM) is high enough for such large screens, electron microscopy (EM) still lags behind and is not adapted to collecting large amounts of data from highly heterogeneous cell populations. Nevertheless, EM is the only technique that offers high-resolution imaging of the entire subcellular context. Correlative light and electron microscopy (CLEM) has made it possible to look at rare events or to address heterogeneous populations. Our goal is to develop new strategies in CLEM. More specifically, we aim at automating the processes of screening large cell populations (living cells or pre-fixed), identifying the sub-populations of interest by LM, targeting these by EM and measuring the key components of the subcellular organization. New 3D-EM techniques such as focused ion beam scanning electron microscopy (FIB-SEM) enable a high degree of automation for the acquisition of high-resolution, full-cell datasets. So far, this has only been applied to individual, often isotropic, target volumes and has not been designed to acquire multiple regions of interest. The ability to acquire full cells with voxel sizes down to 5 nm x 5 nm x 5 nm (x, y referring to pixel size, z referring to slice thickness) leads to the accumulation of large datasets. Their analysis involves either tedious manual segmentation or automated segmentation algorithms that are not yet well established. To enable the analysis and quantification of such extensive amounts of data, we decided to explore the potential of stereology protocols in combination with automated acquisition in the FIB-SEM. Instead of isotropic datasets, a few evenly spaced sections are used to quantify subcellular structures. Our strategy therefore combines CLEM, 3D-EM and stereology to collect and analyze large numbers of cells selected on the basis of their phenotype as visible by fluorescence microscopy. We demonstrate the power of the approach in a systematic screen of Golgi apparatus morphology upon altered expression of 10 proteins, plus negative and positive controls. In parallel to this core project, we demonstrate the power of combining correlative approaches with 3D-EM for the detailed structural analysis of fundamental cell biology events during cell division, and for understanding complex physiological transitions in a multicellular model organism.
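
    The abstract does not state which stereological estimators were applied to the evenly spaced sections; as one plausible illustration, the sketch below implements the classical Cavalieri point-counting volume estimator on a handful of binary section masks. The function name, grid settings and toy geometry are assumptions, not the thesis' protocol.

```python
# Sketch of a classical stereology protocol (Cavalieri point counting) applied
# to a few evenly spaced FIB-SEM sections instead of a full isotropic volume.
import numpy as np

def cavalieri_volume(sections, section_spacing_nm, pixel_size_nm, grid_step_px=16):
    """Estimate structure volume from binary masks of evenly spaced sections.

    sections           : iterable of 2D boolean masks (structure of interest = True)
    section_spacing_nm : distance between consecutive sections
    pixel_size_nm      : in-plane pixel size (square pixels assumed)
    grid_step_px       : spacing of the counting grid, in pixels
    """
    area_per_point = (grid_step_px * pixel_size_nm) ** 2  # nm^2 represented by each grid point
    hits = 0
    for mask in sections:
        grid = mask[::grid_step_px, ::grid_step_px]       # sample the mask on a regular point grid
        hits += int(grid.sum())                           # grid points falling on the structure
    # Cavalieri principle: V ~ (sum of estimated section areas) * section spacing
    return hits * area_per_point * section_spacing_nm     # nm^3

# Toy usage: a sphere of radius 400 nm, sections 100 nm apart, 5 nm pixels.
r, px, dz = 400.0, 5.0, 100.0
zs = np.arange(-r, r, dz)
yy, xx = np.mgrid[-100:100, -100:100] * px
secs = [(xx ** 2 + yy ** 2) < (r ** 2 - z ** 2) for z in zs]
print(cavalieri_volume(secs, dz, px), 4 / 3 * np.pi * r ** 3)  # estimate vs. true volume
```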

    Geometric Expression Invariant 3D Face Recognition using Statistical Discriminant Models

    Currently there is no complete face recognition system that is invariant to all facial expressions. Although humans find it easy to identify and recognise faces regardless of changes in illumination, pose and expression, producing a computer system with a similar capability has proved particularly difficult. Three-dimensional face models are geometric in nature and therefore have the advantage of being invariant to head pose and lighting. However, they are still susceptible to facial expressions, as can be seen in the decrease in recognition results obtained with principal component analysis when expressions are added to a data set. In order to achieve expression-invariant face recognition, we have employed a tensor algebra framework to represent 3D face data with facial expressions in a parsimonious space. Face variation factors are organised into separate subject and facial expression modes. We manipulate this representation using singular value decomposition on sub-tensors, each representing one variation mode. This framework addresses the shortcomings of PCA in less constrained environments while preserving the integrity of the 3D data. The results show improved recognition rates for faces and facial expressions, even recognising high-intensity expressions that are not in the training datasets. We have experimentally determined a set of anatomical landmarks that describe facial expressions most effectively, and found that the best placement of landmarks for distinguishing different facial expressions is in areas around the prominent features, such as the cheeks and eyebrows. Recognition results using landmark-based face recognition could be improved with better landmark placement. We also investigated the possibility of achieving expression-invariant face recognition by reconstructing and manipulating realistic facial expressions. We proposed a tensor-based statistical discriminant analysis method to reconstruct facial expressions and, in particular, to neutralise them. The synthesised facial expressions are visually more realistic than those generated using conventional active shape modelling (ASM). We then used the reconstructed neutral faces in the sub-tensor framework for recognition purposes, and the recognition results showed a slight improvement. Besides biometric recognition, this novel tensor-based synthesis approach could be used in computer games and real-time animation applications.
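
    The thesis' exact tensor framework is not reproduced here; as a generic illustration of organising face data in subject and expression modes and applying SVD per mode, the sketch below computes a higher-order SVD (HOSVD) of a toy subjects x expressions x features tensor. The tensor shape, the noise model and every name are illustrative assumptions.

```python
# Generic multilinear-analysis sketch: mode-n unfolding and per-mode SVD of a
# face data tensor organised by subject and expression.  Toy data only.
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Factor matrices (one per mode) and core tensor of a higher-order SVD."""
    U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]
    core = T
    for m, Um in enumerate(U):                   # core = T x_1 U1^T x_2 U2^T x_3 U3^T
        core = np.moveaxis(np.tensordot(Um.T, core, axes=([1], [m])), 0, m)
    return U, core

# Toy tensor: 10 subjects x 4 expressions x 300 stacked landmark coordinates.
rng = np.random.default_rng(0)
identity = rng.normal(size=(10, 1, 300))         # subject-specific shape component
expression = rng.normal(size=(1, 4, 300))        # expression-specific deformation
T = identity + 0.3 * expression + 0.05 * rng.normal(size=(10, 4, 300))

U, core = hosvd(T)
U_subject, U_expression, U_feature = U           # one factor matrix per variation mode
```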