    Robust Temporally Coherent Laplacian Protrusion Segmentation of 3D Articulated Bodies

    In motion analysis and understanding, it is important to be able to fit a suitable model or structure to the temporal series of observed data, in order to describe motion patterns in a compact way and to discriminate between them. In an unsupervised context, i.e., when no prior model of the moving object(s) is available, such a structure has to be learned from the data in a bottom-up fashion. In recent times, volumetric approaches, in which the motion is captured by a number of cameras and a voxel-set representation of the body is built from the camera views, have gained ground due to attractive features such as inherent view-invariance and robustness to occlusions. Automatic, unsupervised segmentation of moving bodies along entire sequences, in a temporally coherent and robust way, has the potential to provide a means of constructing a bottom-up model of the moving body and of tracking motion cues that may later be exploited for motion classification. Spectral methods such as locally linear embedding (LLE) can be useful in this context, as they preserve the "protrusions" of articulated shapes, i.e., high-curvature regions of the 3D volume, while improving their separation in a lower-dimensional space, making them easier to cluster. In this paper we therefore propose a spectral approach to unsupervised and temporally coherent body-protrusion segmentation along time sequences. Volumetric shapes are clustered in an embedding space, clusters are propagated in time to ensure coherence, and merged or split to accommodate changes in the body's topology. Experiments on both synthetic and real sequences of dense voxel-set data are shown. The results support the ability of the proposed method to cluster body parts consistently over time in a totally unsupervised fashion, its robustness to sampling density and shape quality, and its potential for bottom-up model construction. (Comment: 31 pages, 26 figures.)
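The clustering step described above can be illustrated with off-the-shelf tools. Below is a minimal sketch, assuming a single frame is given as an (N, 3) array of occupied voxel coordinates and using scikit-learn's LocallyLinearEmbedding and KMeans as generic stand-ins for the paper's embedding and clustering; the temporal propagation and split/merge logic of the actual method is not reproduced, and the toy volume, neighbour count, and cluster numbers are illustrative only.

```python
# Minimal sketch: embed voxel coordinates with LLE, then cluster in the
# embedding space, in the spirit of separating protrusions of an articulated
# shape. All data and parameters below are illustrative.
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.cluster import KMeans

def segment_protrusions(voxels, n_clusters=5, n_neighbors=10, n_components=3):
    """voxels: (N, 3) array of occupied voxel coordinates for one frame."""
    lle = LocallyLinearEmbedding(n_neighbors=n_neighbors,
                                 n_components=n_components)
    embedded = lle.fit_transform(voxels)           # unroll the articulated shape
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embedded)
    return labels                                  # per-voxel body-part label

# toy volume: one elongated "limb" attached to a blob-like "torso"
rng = np.random.default_rng(0)
torso = rng.normal(0.0, 1.0, size=(300, 3))
limb = np.column_stack([np.linspace(1, 6, 150),
                        np.zeros(150),
                        np.zeros(150)]) + rng.normal(0, 0.1, (150, 3))
labels = segment_protrusions(np.vstack([torso, limb]), n_clusters=2)
print(np.bincount(labels))                         # rough sizes of the two parts
```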

    Risk analysis for smart homes and domestic robots using robust shape and physics descriptors, and complex boosting techniques

    In this paper, the notion of risk analysis within 3D scenes using vision-based techniques is introduced. In particular, the problem of risk estimation of indoor environments at the scene and object level is considered, with applications in domestic robots and smart homes. To this end, the proposed Risk Estimation Framework is described, which provides a quantified risk score for a given scene. This methodology is extended with the introduction of a novel robust kernel for 3D shape descriptors such as 3D HOG and SIFT3D, which aims to reduce the effects of outliers in the proposed risk recognition methodology. The Physics Behaviour Feature (PBF) is presented, which uses an object's angular velocity, obtained using Newtonian physics simulation, as a descriptor. Furthermore, an extension of boosting techniques for learning is suggested in the form of the novel Complex and Hyper-Complex Adaboost, which greatly increase the computational efficiency of the original technique. To evaluate the proposed robust descriptors, an enriched version of the 3D Risk Scenes (3DRS) dataset with extra objects, scenes and meta-data was utilised. A comparative study was conducted, demonstrating that the suggested approach outperforms current state-of-the-art descriptors.
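To make the learning step concrete, here is a minimal sketch of a conventional AdaBoost classifier over pre-computed 3D shape descriptors, using scikit-learn; the paper's Complex and Hyper-Complex Adaboost variants and its robust kernel are not reproduced, and the descriptor vectors and risk labels below are synthetic stand-ins.

```python
# Minimal sketch: a standard AdaBoost baseline over pre-computed descriptor
# vectors (e.g. 3D HOG / SIFT3D). Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 128))                  # stand-in descriptor vectors
y = (X[:, :4].sum(axis=1) > 0).astype(int)       # stand-in labels: 0 = safe, 1 = risky

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = AdaBoostClassifier(n_estimators=100).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```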

    Multimodal Image Fusion and Its Applications.

    Image fusion integrates different modality images to provide comprehensive information about the image content, increasing interpretation capabilities and producing more reliable results. There are several advantages of combining multi-modal images, including improving geometric corrections, complementing data for improved classification, and enhancing features for analysis, among others. This thesis develops the image fusion idea in the context of two domains: material microscopy and biomedical imaging. The proposed methods include image modeling, image indexing, image segmentation, and image registration. The common theme behind all proposed methods is the use of complementary information from multi-modal images to achieve better registration, feature extraction, and detection performance. In material microscopy, we propose an anomaly-driven image fusion framework to perform the task of material microscopy image analysis and anomaly detection. This framework is based on a probabilistic model that enables us to index, process and characterize the data with systematic and well-developed statistical tools. In biomedical imaging, we focus on the multi-modal registration problem for functional MRI (fMRI) brain images, which improves the performance of brain activation detection.
    PhD thesis, Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies.
    http://deepblue.lib.umich.edu/bitstream/2027.42/120701/1/yuhuic_1.pd
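A core ingredient of intensity-based multi-modal registration of the kind mentioned above is a similarity measure such as mutual information. The following is a minimal NumPy sketch of that measure on synthetic 2D images; it illustrates the general principle rather than the thesis' registration pipeline, and the bin count and image sizes are arbitrary.

```python
# Minimal sketch: mutual information between two image modalities, a common
# similarity measure for multi-modal registration. Images here are synthetic.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                      # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)            # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(2)
anatomy = rng.random((64, 64))
functional = 0.7 * anatomy + 0.3 * rng.random((64, 64))   # partially related modality
print("MI(anatomy, functional):", mutual_information(anatomy, functional))
print("MI(anatomy, noise)     :", mutual_information(anatomy, rng.random((64, 64))))
```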

    Feature-driven Volume Visualization of Medical Imaging Data

    Direct volume rendering (DVR) is a volume visualization technique that has proved to be a very powerful tool in many scientific visualization domains. Diagnostic medical imaging is one such domain, in which DVR provides new capabilities for the analysis of complex cases and improves the efficiency of image interpretation workflows. However, the full potential of DVR in the medical domain has not yet been realized. A major obstacle to a better integration of DVR in the medical domain is the time-consuming process of optimizing the rendering parameters that are needed to generate diagnostically relevant visualizations in which the important features hidden in image volumes are clearly displayed, such as the shape and spatial localization of tumors, their relationship with adjacent structures, and temporal changes in the tumors. In current workflows, clinicians must manually specify the transfer function (TF), view-point (camera), clipping planes, and other visual parameters. Another obstacle to the adoption of DVR in the medical domain is the ever-increasing volume of imaging data. The advancement of imaging acquisition techniques has led to a rapid expansion in the size of the data, in the form of higher resolutions, temporal imaging acquisitions to track treatment responses over time, and an increase in the number of imaging modalities used for a single procedure. The manual specification of the rendering parameters under these circumstances is very challenging. This thesis proposes a set of innovative methods that visualize important features in multi-dimensional and multi-modality medical images by automatically or semi-automatically optimizing the rendering parameters. Our methods enable visualizations necessary for the diagnostic procedure in which a 2D slice of interest (SOI) can be augmented with 3D anatomical contextual information to provide accurate spatial localization of 2D features in the SOI; the rendering parameters are automatically computed to guarantee the visibility of 3D features; and changes in 3D features can be tracked in temporal data under the constraint of consistent contextual information. We also present a method for the efficient computation of visibility histograms (VHs) using adaptive binning, which allows our optimal DVR to be automated and visualized in real time. We evaluated our methods by producing visualizations for a variety of clinically relevant scenarios and imaging data sets. We also examined the computational performance of our methods for these scenarios.
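As a rough illustration of the visibility-histogram idea, the sketch below composites a single ray front-to-back with a simple 1D opacity transfer function and accumulates how much each intensity bin actually contributes to the final image; the adaptive binning and full-volume computation described in the thesis are not reproduced, and the ray samples, transfer function and bin count are made up.

```python
# Minimal sketch: front-to-back compositing along one ray with a 1D opacity
# transfer function, accumulating a per-intensity-bin visibility histogram.
import numpy as np

def ray_visibility_histogram(samples, opacity_tf, bins=16):
    """samples: intensities along one ray in [0, 1]; opacity_tf: callable."""
    vis_hist = np.zeros(bins)
    transmittance = 1.0                       # fraction of light still reaching the eye
    for s in samples:
        alpha = opacity_tf(s)
        visibility = transmittance * alpha    # contribution actually seen
        vis_hist[min(int(s * bins), bins - 1)] += visibility
        transmittance *= (1.0 - alpha)        # attenuate for samples behind this one
    return vis_hist

opacity_tf = lambda v: np.clip((v - 0.5) * 2.0, 0.0, 0.6)   # emphasise bright voxels
rng = np.random.default_rng(3)
print(ray_visibility_histogram(rng.random(200), opacity_tf))
```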

    Facilitating the design of multidimensional and local transfer functions for volume visualization

    The importance of volume visualization is increasing, since the sizes of the datasets that need to be inspected grow with every new generation of medical scanners (e.g., CT and MR). Direct volume rendering is a 3D visualization technique that has, in many cases, clear benefits over 2D views. It is able to show 3D information, facilitating mental reconstruction of the 3D shape of objects and their spatial relations. The complexity of the settings required to generate a 3D rendering is, however, one of the main reasons why this technique is not used more widely in practice. Transfer functions play an important role in the appearance of volume-rendered images by determining the optical properties of each piece of the data. The transfer function determines what will be seen and how. The goal of the project on which this PhD thesis reports was to develop and investigate new approaches that facilitate the setting of transfer functions. As shown in the state-of-the-art overview in Chapter 2, there are two main aspects that influence the effectiveness of a TF: the choice of the TF domain and the process of defining the shape of the TF. The choice of a TF domain, i.e., the choice of the data properties used, directly determines which aspects of the volume data can be visualized. In many approaches, special attention is given to TF domains that enable an easier selection and visualization of boundaries between materials. The boundaries are an important aspect of the volume data, since they reveal the shapes and sizes of objects. Our research into improving the TF definition focused on introducing new user interaction methods and automation techniques that shield the user from the complex process of manually defining the shape and color properties of TFs. Our research dealt with both the TF domain and the TF definition, since they are closely related. A suitable TF domain can not only greatly improve the manual definition, but also, more importantly, increase the possibilities of using automated techniques.

    Chapter 3 presents a new TF domain. We have used the LH space and the associated LH histogram for TFs based on material boundaries. We showed that the LH space reduces the ambiguity when selecting boundaries compared to the commonly used space of data value and gradient magnitude. Furthermore, boundaries appear as blobs in the LH histogram, which makes them easier to select. Its compactness and the easier selectivity of boundaries make the LH histogram suitable for the introduction of clustering-based automation. The mirrored extension of the LH space differentiates between both sides of a boundary. The mirrored LH histogram shows interesting properties of this space, allowing all boundaries belonging to one material to be selected in an easy way. We have also shown that segmentation techniques, such as region growing methods, can benefit from the properties of the LH space. Standard cost functions based on the data value and/or the gradient magnitude may experience problems at the boundaries due to the partial volume effect. Our cost function based on the LH space, however, is capable of handling the region growing of boundaries better.

    Chapter 4 presents an interaction framework for the TF definition based on hierarchical clustering of material boundaries. Our framework aims at an easy combination of various similarity measures that reflect the requirements of the user. One of the main benefits of the framework is the absence of similarity-weighting coefficients, which are usually hard to define. Further, the framework enables the user to visualize objects that may exist at different levels of the hierarchy. We also introduced two similarity measures that illustrate the functionality of the framework. The main contribution is the first similarity measure, which takes advantage of the properties of the LH histogram from Chapter 3. We assumed that the shapes of the peaks in the LH histogram can guide the grouping of clusters. The second similarity measure is based on the spatial relationships of clusters.

    In Chapter 5, we presented the part of our research that focused on one of the main issues encountered in TFs in general. Standard TFs, as they are applied everywhere in the volume in the same way, become difficult to use when the data properties (measurements) of the same material vary over the volume, for example due to acquisition inaccuracies. We address this problem by introducing the concept and framework of local transfer functions (LTFs). Local transfer functions are based on using locally applicable TFs in cases where it might be difficult or impossible to define a globally applicable TF. We discussed a number of reasons that hamper a global TF and illustrated how LTFs may help to alleviate these problems. We have also discussed how multiple TFs can be combined and automatically adapted. One of our contributions is the use of the similarity of local histograms and their correlation for the combination and adaptation of LTFs.
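To give a flavour of the LH space described above, the sketch below builds a simplified LH histogram: for every voxel with a strong gradient it steps a few voxels along the positive and negative gradient directions and records the lowest (L) and highest (H) values reached. The actual method tracks along the boundary profile to its extrema, so this is only an approximation; the toy volume, step count and thresholds are arbitrary.

```python
# Minimal sketch: a simplified LH histogram. Boundaries should appear as
# off-diagonal blobs (low L, high H). Volume and parameters are illustrative.
import numpy as np

def lh_histogram(vol, steps=4, grad_thresh=0.05, bins=32):
    gz, gy, gx = np.gradient(vol.astype(float))
    gmag = np.sqrt(gx**2 + gy**2 + gz**2)
    hist = np.zeros((bins, bins))
    for z, y, x in np.argwhere(gmag > grad_thresh):
        d = np.array([gz[z, y, x], gy[z, y, x], gx[z, y, x]]) / gmag[z, y, x]
        lo = hi = vol[z, y, x]
        for sign in (+1, -1):                      # walk both ways along the gradient
            p = np.array([z, y, x], float)
            for _ in range(steps):
                p = p + sign * d
                q = np.clip(np.round(p).astype(int), 0, np.array(vol.shape) - 1)
                v = vol[q[0], q[1], q[2]]
                lo, hi = min(lo, v), max(hi, v)
        hist[int(lo * (bins - 1)), int(hi * (bins - 1))] += 1
    return hist                                    # blobs correspond to material boundaries

# toy volume: a bright sphere (value 1) inside a dark background (value 0)
zz, yy, xx = np.mgrid[:32, :32, :32]
vol = (((zz - 16)**2 + (yy - 16)**2 + (xx - 16)**2) < 8**2).astype(float)
print(lh_histogram(vol).max())
```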

    Efficient automatic correction and segmentation based 3D visualization of magnetic resonance images

    In recent years, the demand for automated processing techniques for digital medical image volumes has increased substantially. Existing algorithms, however, still often require manual interaction, and newly developed automated techniques are often intended for a narrow segment of processing needs. The goal of this research was to develop algorithms suitable for fast and effective correction and advanced visualization of digital MR image volumes with minimal human operator interaction. This research has resulted in a number of techniques for automated processing of MR image volumes, including a novel MR inhomogeneity correction algorithm, derivative surface fitting (dsf); an automatic tissue detection algorithm (atd); and a new fast technique for interactive 3D visualization of segmented volumes called gravitational shading (gs). These newly developed algorithms provided a foundation for the automated MR processing pipeline incorporated into the UniViewer medical imaging software developed in our group and available to the public, which allowed extensive testing and evaluation of the proposed techniques. Dsf was compared with two previously published methods on 17 digital image volumes. In this comparison, dsf demonstrated faster correction speeds and uniform image quality improvement, and it was the only algorithm that did not remove anatomic detail. Gs was compared with the previously published algorithm fsvr and produced improved rendering quality while preserving real-time frame rates. These results show that the automated pipeline design principles used in this dissertation provide the necessary tools for the development of a fast and effective system for the automated correction and visualization of digital MR image volumes.
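As background for the inhomogeneity-correction step, the sketch below fits a generic low-order polynomial surface to the log intensities of a 2D slice and divides the estimated smooth field out; this is a common baseline for bias-field correction and is not the dissertation's dsf algorithm, and the slice, polynomial order and bias field are synthetic.

```python
# Minimal sketch: generic polynomial bias-field correction of one 2D MR slice.
# Not the dissertation's dsf method; data and parameters are synthetic.
import numpy as np

def correct_bias(slice_img, order=2):
    h, w = slice_img.shape
    y, x = np.mgrid[:h, :w]
    y, x = y / h, x / w
    # design matrix of polynomial terms up to the given total order
    terms = [x**i * y**j for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack([t.ravel() for t in terms])
    mask = slice_img.ravel() > 0                          # fit foreground voxels only
    coef, *_ = np.linalg.lstsq(A[mask], np.log(slice_img.ravel()[mask]), rcond=None)
    bias = np.exp(A @ coef).reshape(h, w)                 # smooth multiplicative field
    return slice_img / bias                               # divide the field out

rng = np.random.default_rng(4)
truth = rng.random((64, 64)) + 0.5
yy, xx = np.mgrid[:64, :64] / 64.0
corrupted = truth * np.exp(0.8 * xx + 0.4 * yy)           # smooth multiplicative bias
corrected = correct_bias(corrupted)
print("residual std of corrected/truth:", np.std(corrected / truth))
```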

    Realistic Virtual Cuts

    Automated Discrimination of Brain Pathological State Attending to Complex Structural Brain Network Properties: The Shiverer Mutant Mouse Case

    Neuroimaging classification procedures between normal and pathological subjects are sparse and highly dependent on an expert's clinical criterion. Here, we aimed to investigate whether possible structural brain network differences in the shiverer mouse mutant, a relevant animal model of myelin-related diseases, reflect intrinsic individual brain properties that allow the automatic discrimination between shiverer and normal subjects. Common structural network properties of shiverer (C3Fe.SWV Mbpshi/Mbpshi, n = 6) and background control (C3HeB.FeJ, n = 6) mice are estimated and compared by means of three diffusion-weighted MRI (DW-MRI) fiber tractography algorithms and a graph framework. First, we found that the brain networks of the control group are significantly more clustered, modularized, efficient and optimized than those of the shiverer group, which presented a significantly increased characteristic path length. These results are in line with previous structural/functional complex brain network analyses that have revealed topological differences and brain network randomization associated with specific states of human brain pathology. In addition, by means of spatial representations of the network measures and discrimination analysis, we show that it is possible to classify with high accuracy to which group each subject belongs, also providing a probability of being a normal or shiverer subject as an individual anatomical classifier. The correct predictions obtained (around 91.6–100%) and the clear spatial subdivisions between control and shiverer mice suggest that there might exist specific network subspaces corresponding to specific brain disorders, supporting the view that complex brain network analyses constitute promising tools for the future creation of interpretable imaging biomarkers.
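The group comparison above rests on standard graph metrics. The sketch below computes three of them (average clustering, characteristic path length, and global efficiency) with networkx on two synthetic graphs that merely stand in for the tractography-derived networks: a small-world graph for the clustered, control-like case and a random graph for the more randomized case.

```python
# Minimal sketch: graph metrics of the kind compared between groups above,
# computed on synthetic stand-in networks rather than tractography data.
import networkx as nx

def network_summary(G):
    # restrict to the largest connected component so path length is defined
    G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
    return {
        "clustering": nx.average_clustering(G),
        "char_path_length": nx.average_shortest_path_length(G),
        "global_efficiency": nx.global_efficiency(G),
    }

control_like = nx.watts_strogatz_graph(90, 8, 0.1, seed=0)   # clustered, short paths
random_like = nx.erdos_renyi_graph(90, 8 / 89, seed=0)       # more randomized wiring

print("control-like:", network_summary(control_like))
print("random-like :", network_summary(random_like))
```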

    Multimodal image analysis of the human brain

    During the last decades, the rapid development of multi-modal and non-invasive brain imaging technologies has revolutionized the ability to study the structure and function of the brain. Great progress has been made in assessing brain injury using Magnetic Resonance Imaging (MRI), while electroencephalography (EEG) is considered the gold standard for the diagnosis of neurological abnormalities. In this thesis we focus on the development of new techniques for multimodal image analysis of the human brain, including MRI segmentation and EEG source localization. In doing so, we bring theory and practice together, focusing on two medical applications: (1) automatic 3D MRI segmentation of the adult brain and (2) multimodal EEG-MRI data analysis of the brain of a newborn with perinatal brain injury. We devote considerable attention to the improvement and development of new methods for accurate and noise-robust image segmentation, which are subsequently used successfully for brain segmentation in MRI of both adults and newborns. In addition, we developed an integrated multimodal method for EEG source localization in the newborn brain. This localization is used for a comparative study between neonatal EEG seizures and acute perinatal brain lesions visible in MRI.
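As a point of reference for the segmentation work, here is a minimal sketch of plain intensity-based k-means tissue clustering of a synthetic MR slice into three classes with scikit-learn; the thesis' accurate and noise-robust segmentation methods go well beyond this baseline, and the slice, class intensities and noise level are made up.

```python
# Minimal sketch: intensity-based k-means clustering of a synthetic MR slice
# into three tissue classes, a common segmentation baseline.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
labels_true = rng.integers(0, 3, size=(64, 64))                    # three tissue types
slice_img = np.choose(labels_true, [0.2, 0.5, 0.8]) + rng.normal(0, 0.03, (64, 64))

km = KMeans(n_clusters=3, n_init=10).fit(slice_img.reshape(-1, 1))
segmentation = km.labels_.reshape(slice_img.shape)                 # per-pixel class map
print("cluster centres:", np.sort(km.cluster_centers_.ravel()))
```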