4 research outputs found

    Screened Poisson hyperfields for shape coding

    We present a novel perspective on shape characterization using the screened Poisson equation. We argue that the effect of the screening parameter is a change of measure of the underlying metric space; screening also corresponds to a conditioned random walker biased by the choice of measure. A continuum of shape fields is created by varying the screening parameter or, equivalently, the bias of the random walker. In addition to creating a regional encoding of the diffusion with a different bias, we further break down the influence of boundary interactions by considering a number of independent random walks, each emanating from a certain boundary point, whose superposition yields the screened Poisson field. Probing the screened Poisson equation from these two complementary perspectives leads to a high-dimensional hyperfield: a rich characterization of the shape that encodes global, local, interior, and boundary interactions. To compactly extract particular shape information from the hyperfield as needed, we apply various decompositions, either to unveil parts of a shape or parts of its boundary, or to create consistent mappings. The latter technique involves lower-dimensional embeddings, which we call screened Poisson encoding maps (SPEM). The expressive power of the SPEM is demonstrated via illustrative experiments as well as a quantitative shape retrieval experiment over a public benchmark database, on which the SPEM method ranks highly among existing state-of-the-art shape retrieval methods.
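
    To make the construction concrete, the following is a minimal sketch of the core computation: solving the screened Poisson equation on a binary 2D shape with a 5-point finite-difference stencil and sweeping the screening parameter to obtain a continuum of fields. The particular form used here, (Δ - ρ)u = -1 inside the shape with zero values outside, as well as the grid discretization and the parameter values, are illustrative assumptions rather than the paper's exact formulation.

```python
# Minimal sketch: screened Poisson field on a binary 2D shape, solved with a
# 5-point finite-difference Laplacian. The screening parameter `rho`, the unit
# source term and the zero (Dirichlet) condition outside the shape are
# illustrative assumptions, not necessarily the paper's exact setup.
import numpy as np
from scipy.sparse import lil_matrix, csc_matrix
from scipy.sparse.linalg import spsolve

def screened_poisson_field(mask, rho=0.05):
    """Solve (Laplacian - rho) u = -1 on interior pixels, with u = 0 outside."""
    h, w = mask.shape
    idx = -np.ones((h, w), dtype=int)
    ys, xs = np.nonzero(mask)
    idx[ys, xs] = np.arange(len(ys))            # linear index for interior pixels
    n = len(ys)
    A = lil_matrix((n, n))
    b = -np.ones(n)
    for k, (y, x) in enumerate(zip(ys, xs)):
        A[k, k] = -4.0 - rho                    # Laplacian diagonal minus screening
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]:
                A[k, idx[ny, nx]] = 1.0         # neighbor inside the shape
            # neighbors outside contribute u = 0 (Dirichlet boundary condition)
    u = spsolve(csc_matrix(A), b)
    field = np.zeros((h, w))
    field[ys, xs] = u
    return field

# A continuum of fields obtained by varying the screening parameter.
shape_mask = np.zeros((64, 64), dtype=bool)
shape_mask[16:48, 12:52] = True
fields = [screened_poisson_field(shape_mask, rho) for rho in (0.0, 0.05, 0.5)]
```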

    Coding shape inside the shape

    The shape of an object lies at the interface between vision and cognition, yet the field of statistical shape analysis is far from developing a general mathematical model to represent shapes that would allow computational descriptions to express some simple tasks that are carried out robustly and effortlessly by humans. In this thesis, novel perspectives on shape characterization are presented in which the shape information is encoded inside the shape. The representation is independent of the shape's dimensionality; hence the model readily extends to any embedding dimension (i.e., 2D, 3D, 4D). A very desirable property is that the representation can fuse shape information with other types of information available inside the shape domain, for example reflectance information from an optical camera. Three novel fields are proposed within the scope of the thesis, namely ‘Scalable Fluctuating Distance Fields’, ‘Screened Poisson Hyperfields’, and ‘Local Convexity Encoding Fields’, which are smooth fields obtained by encoding desired shape information. ‘Scalable Fluctuating Distance Fields’, which encode parts explicitly, are presented as an interactive tool for tumor protrusion segmentation and as an underlying representation for tumor follow-up analysis. Secondly, ‘Screened Poisson Hyperfields’ provide a rich characterization of the shape that encodes global, local, interior, and boundary interactions. Low-dimensional embeddings of the hyperfields are employed to address problems of shape partitioning, 2D shape classification, and 3D non-rigid shape retrieval. Moreover, the embeddings are used to translate the shape matching problem into an image matching problem, leveraging the existing arsenal of image matching tools that could not previously be applied to shape matching. Finally, the ‘Local Convexity Encoding Fields’ are formed by encoding information related to local symmetry and local convexity-concavity properties. The representation performance of the shape fields is presented both qualitatively and quantitatively. The descriptors obtained using the regional encoding perspective outperform existing state-of-the-art shape retrieval methods over public benchmark databases, which is highly motivating for further study of regional-volumetric shape representations.
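
    Building on the sketch above, the hyperfield idea can be illustrated by stacking fields computed at several screening values into one feature vector per pixel and projecting them to a few dimensions. In the sketch below, PCA via SVD stands in for the thesis' low-dimensional embedding; `screened_poisson_field` and `shape_mask` are the hypothetical helpers from the previous sketch, and the screening values are arbitrary choices for illustration.

```python
# Minimal sketch of a SPEM-like encoding: stack screened Poisson fields computed
# at several screening values into one feature vector per interior pixel, then
# project to a few dimensions. PCA via SVD stands in for the thesis' embedding;
# `screened_poisson_field` / `shape_mask` come from the previous sketch.
import numpy as np

def hyperfield_embedding(mask, rhos=(0.01, 0.05, 0.1, 0.5, 1.0), n_dims=3):
    # One field per screening value; each interior pixel becomes a feature vector.
    feats = np.stack([screened_poisson_field(mask, r)[mask] for r in rhos], axis=1)
    feats -= feats.mean(axis=0)                  # center the features before PCA
    _, _, vt = np.linalg.svd(feats, full_matrices=False)
    coords = feats @ vt[:n_dims].T               # project onto leading components
    maps = np.zeros(mask.shape + (n_dims,))
    maps[mask] = coords                          # write the embedding back as images
    return maps

encoding_maps = hyperfield_embedding(shape_mask)  # low-dimensional per-pixel maps
```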

    Novel mathematical methods for analysis of brain white matter fibers using diffusion MRI

    White matter fibers connect and transfer information among various gray matter regions of the brain. Diffusion Magnetic Resonance Imaging (DMRI) allows in-vivo estimation of fiber orientations. From the estimated orientations, a 3D curve representation of the trajectory of fibers can be reconstructed in a process known as tractography. Automatic classification of these "tracts" into classes of anatomically known fiber bundles is a very important problem in neuroimage computing. In this thesis, three automatic fiber classification methods are proposed. The first two are based on combining neuroanatomical priors with density-based clustering. The first method includes brainstem heuristics, but the second is more general and can be applied to any fiber pathway in the brain. Further, the second method introduces a novel fiber representation, the Neighborhood Resolved Fiber Orientation Distribution (NRFOD), which represents a tract as a set of histograms that encode the distribution of fiber orientations in its neighborhood. The third method utilizes the NRFOD representation to directly map a tract to a probability estimate for each bundle class in a supervised classification framework. A practical training and validation set creation methodology is proposed. Additionally, the thesis includes statistical significance tests to investigate whether the structural changes between pre-operative and post-operative fiber bundles after a tumor resection operation are related to changes in patients' cognitive performance scores. To this end, a fiber-bundle-to-fiber-bundle registration method and various quantitative measures of the structural change are proposed. We present results over DMRI data with clinical evaluations of 30 patients with brainstem tumors.
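
    The sketch below illustrates, in simplified form, the kind of neighborhood orientation histogram that an NRFOD-style representation is built from: each point of a streamline collects the directions of nearby fiber segments into a 2D angular histogram. The neighborhood radius, the angular binning, the antipodal folding, and the averaging along the tract are illustrative assumptions, not the thesis' exact construction.

```python
# Minimal sketch of a neighborhood orientation-histogram descriptor for a
# streamline, loosely in the spirit of NRFOD. Radius, binning, antipodal
# folding and averaging along the tract are illustrative assumptions.
import numpy as np

def segment_orientations(fibers):
    """Midpoints and unit directions of all line segments of all fibers."""
    mids, dirs = [], []
    for f in fibers:                             # each fiber: (k, 3) array of points
        seg = np.diff(f, axis=0)
        mids.append((f[:-1] + f[1:]) / 2.0)
        dirs.append(seg / np.linalg.norm(seg, axis=1, keepdims=True))
    return np.vstack(mids), np.vstack(dirs)

def nrfod_like_descriptor(fiber, fibers, radius=5.0, n_bins=(6, 12)):
    mids, dirs = segment_orientations(fibers)
    hists = []
    for p in fiber:                              # one orientation histogram per point
        near = np.linalg.norm(mids - p, axis=1) < radius
        d = dirs[near]
        d = d * np.sign(d[:, 2:3] + 1e-9)        # fold antipodal directions together
        theta = np.arccos(np.clip(d[:, 2], -1.0, 1.0))       # polar angle
        phi = np.arctan2(d[:, 1], d[:, 0]) % (2 * np.pi)     # azimuth angle
        h, _, _ = np.histogram2d(theta, phi, bins=n_bins,
                                 range=[[0, np.pi], [0, 2 * np.pi]])
        hists.append(h.ravel() / max(h.sum(), 1.0))
    return np.mean(hists, axis=0)                # average histogram along the tract

# Example with two synthetic fibers (a straight line and a shifted copy).
t = np.linspace(0.0, 20.0, 40)[:, None]
fiber_a = np.hstack([t, np.zeros_like(t), np.zeros_like(t)])
fiber_b = fiber_a + np.array([0.0, 1.0, 0.5])
descriptor = nrfod_like_descriptor(fiber_a, [fiber_a, fiber_b])
```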

    Deep Shape Representations for 3D Object Recognition

    Deep learning is a rapidly growing discipline that models high-level features in data as multilayered neural networks. The recent trend toward deep neural networks has been driven, in large part, by a combination of affordable computing hardware, open source software, and the availability of pre-trained networks on large-scale datasets. In this thesis, we propose deep learning approaches to 3D shape recognition using a multilevel feature learning paradigm. We start by comprehensively reviewing recent shape descriptors, including hand-crafted descriptors, mostly developed in the spectral geometry setting, as well as those obtained via learning-based methods. Then, we introduce novel multi-level feature learning approaches using spectral graph wavelets, bag-of-features, and deep learning. Low-level features are first extracted from a 3D shape using spectral graph wavelets. Mid-level features are then generated via the bag-of-features model by employing locality-constrained linear coding as a feature coding method, in conjunction with the biharmonic distance and intrinsic spatial pyramid matching, in order to effectively measure the spatial relationship between each pair of the bag-of-features descriptors. For the task of 3D shape retrieval, high-level shape features are learned via a deep auto-encoder on mid-level features. Then, we compare the deep learned descriptor of a query shape to the descriptors of all shapes in the dataset using a dissimilarity measure for 3D shape retrieval. For the task of 3D shape classification, mid-level features are represented as 2D images in order to be fed into a pre-trained convolutional neural network to learn high-level features from the penultimate fully-connected layer of the network. Finally, a multiclass support vector machine classifier is trained on these deep learned descriptors, and the classification accuracy is subsequently computed. The proposed 3D shape retrieval and classification approaches are evaluated on three standard 3D shape benchmarks through extensive experiments, and the results show compelling superiority of our approaches over state-of-the-art methods.
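
    The mid-level coding step can be sketched concretely: locality-constrained linear coding (LLC) expresses each low-level descriptor as a weighted combination of its k nearest codewords, and the codes are pooled into a bag-of-features vector. In the sketch below, the random codebook (k-means centers would be used in practice), the regularization constant, the max-pooling, and the cosine dissimilarity at the end are illustrative assumptions; the spatial pyramid matching, auto-encoder, and CNN stages described above are not shown.

```python
# Minimal sketch of the mid-level coding step: locality-constrained linear
# coding (LLC) of low-level descriptors against a codebook, max-pooled into a
# bag-of-features vector. Codebook, regularization, pooling and dissimilarity
# are illustrative assumptions standing in for the thesis' full pipeline.
import numpy as np

def llc_codes(X, codebook, k=5, reg=1e-4):
    """X: (n, d) low-level descriptors; codebook: (m, d). Returns (n, m) codes."""
    n, _ = X.shape
    m = codebook.shape[0]
    codes = np.zeros((n, m))
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # squared distances
    knn = np.argsort(d2, axis=1)[:, :k]          # k nearest codewords per descriptor
    for i in range(n):
        B = codebook[knn[i]] - X[i]              # shift local codewords to the origin
        C = B @ B.T                              # local covariance matrix
        C += reg * np.trace(C) * np.eye(k)       # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(k))
        codes[i, knn[i]] = w / w.sum()           # codes sum to one (shift invariance)
    return codes

def bag_of_features(X, codebook):
    return llc_codes(X, codebook).max(axis=0)    # max-pool the codes over the shape

# Usage: compare two shapes' pooled codes with a simple cosine dissimilarity.
rng = np.random.default_rng(0)
codebook = rng.standard_normal((64, 16))
query = bag_of_features(rng.standard_normal((200, 16)), codebook)
target = bag_of_features(rng.standard_normal((180, 16)), codebook)
dissimilarity = 1.0 - query @ target / (np.linalg.norm(query) * np.linalg.norm(target) + 1e-12)
```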