
    Medical images modality classification using multi-scale dictionary learning

    In this paper, we propose a method for classifying medical images captured by different sensors (modalities), based on multi-scale wavelet representations and dictionary learning. Wavelet features extracted from an image provide discrimination useful for classifying medical images, namely diffusion tensor imaging (DTI), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), and functional magnetic resonance imaging (fMRI). The ability of online dictionary learning (ODL) to achieve a sparse representation of an image is exploited to build a dictionary for each class from the multi-scale (wavelet) features. An experimental analysis performed on a set of images from the ICBM medical database demonstrates the efficacy of the proposed method.
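
    The abstract describes wavelet features feeding one ODL dictionary per modality, with classification presumably based on how well each class dictionary reconstructs a test image. A minimal sketch of that idea using pywt and scikit-learn follows; the db4 wavelet, subband-energy features, atom count, and reconstruction-error decision rule are illustrative assumptions rather than the paper's exact design.

```python
# Sketch (not the authors' exact pipeline): wavelet subband-energy features,
# one online dictionary per modality, classification by reconstruction error.
import numpy as np
import pywt
from sklearn.decomposition import MiniBatchDictionaryLearning, sparse_encode

def wavelet_features(image, wavelet="db4", level=3):
    """Multi-scale feature vector: log-energy of each wavelet subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    bands = [coeffs[0]] + [b for detail in coeffs[1:] for b in detail]
    return np.array([np.log1p(np.sum(b.astype(float) ** 2)) for b in bands])

def train_class_dictionaries(features_by_class, n_atoms=16):
    """Learn one ODL dictionary per modality (e.g., DTI, MRI, MRA, fMRI)."""
    dictionaries = {}
    for label, feats in features_by_class.items():
        odl = MiniBatchDictionaryLearning(
            n_components=n_atoms, alpha=1.0, batch_size=8, random_state=0
        )
        dictionaries[label] = odl.fit(np.asarray(feats)).components_
    return dictionaries

def classify(feature, dictionaries, n_nonzero=3):
    """Pick the modality whose dictionary reconstructs the feature best."""
    errors = {}
    for label, D in dictionaries.items():
        code = sparse_encode(
            feature[None, :], D, algorithm="omp", n_nonzero_coefs=n_nonzero
        )
        errors[label] = np.linalg.norm(feature[None, :] - code @ D)
    return min(errors, key=errors.get)
```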

    Reducible conformal holonomy in any metric signature and application to twistor spinors in low dimension

    We prove that, given a pseudo-Riemannian conformal structure whose conformal holonomy representation fixes a totally lightlike subspace of arbitrary dimension, there is, with respect to a local metric in the conformal class defined off a singular set, a parallel, totally lightlike distribution on the tangent bundle which contains the image of the Ricci tensor. This generalizes results obtained for invariant lightlike lines and planes and closes a gap in the understanding of the geometric meaning of reducibly acting conformal holonomy groups. We show how this result naturally applies to the classification of geometries admitting twistor spinors in some low-dimensional split signatures when they are described using conformal spin tractor calculus. Together with already known results about generic distributions in dimensions 5 and 6, we obtain a complete geometric description of local geometries admitting real twistor spinors in signatures (3,2) and (3,3). In contrast to the generic case, where generic geometric distributions play an important role, the underlying geometries in the non-generic case without zeroes turn out to admit integrable distributions.
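
    A compact LaTeX restatement of the abstract's main claim may help fix the statement; the symbols used here ((M, c) for the conformal manifold, g for the local metric, and a script L for the distribution) are chosen for illustration and are not taken from the paper.

```latex
% Restatement of the abstract's main result (notation assumed for illustration).
\begin{theorem}
Let $(M,c)$ be a pseudo-Riemannian conformal structure whose conformal
holonomy representation fixes a totally lightlike subspace of arbitrary
dimension. Then, off a singular set, there exists a local metric
$g \in c$ and a $\nabla^{g}$-parallel, totally lightlike distribution
$\mathcal{L} \subset TM$ such that
\[
  \operatorname{im}\!\left(\operatorname{Ric}^{g}\right) \subset \mathcal{L}.
\]
\end{theorem}
```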

    Tensor-Based Algorithms for Image Classification

    Interest in machine learning with tensor networks has been growing rapidly in recent years. We show that tensor-based methods developed for learning the governing equations of dynamical systems from data can likewise be used for supervised learning problems, and we propose two novel approaches for image classification. One is a kernel-based reformulation of the previously introduced multidimensional approximation of nonlinear dynamics (MANDy); the other is an alternating ridge regression in the tensor-train format. We apply both methods to the MNIST and Fashion-MNIST data sets and show that the approaches are competitive with state-of-the-art neural network-based classifiers.
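
    The kernel-based reformulation exploits the fact that the inner product of tensor-product (rank-one tensor) feature maps factorizes into a product of local inner products, so the exponentially large feature tensor never has to be formed explicitly. The sketch below illustrates that kernel trick with kernel ridge regression on scikit-learn's small digits data set; the local feature map (cos(pi*x/2), sin(pi*x/2)), the regularization value, and the data set are assumptions for illustration and not the paper's exact MANDy setup.

```python
# Kernel trick for tensor-product feature maps: with the assumed local map
# [cos(pi*x/2), sin(pi*x/2)], the kernel factorizes pixel-wise into
# prod_i cos(pi/2 * (x_i - y_i)), so kernel ridge regression runs without
# ever building the full feature tensor.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

def product_kernel(A, B):
    """Gram matrix of the tensor-product feature map, accumulated pixel by pixel."""
    K = np.ones((A.shape[0], B.shape[0]))
    for i in range(A.shape[1]):
        K *= np.cos(0.5 * np.pi * (A[:, i][:, None] - B[:, i][None, :]))
    return K

X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixels to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

Y = np.eye(10)[y_tr]                           # one-hot targets
lam = 1e-3                                     # ridge regularization (assumed)
K = product_kernel(X_tr, X_tr)
alpha = np.linalg.solve(K + lam * np.eye(len(X_tr)), Y)

scores = product_kernel(X_te, X_tr) @ alpha
print("test accuracy:", np.mean(scores.argmax(axis=1) == y_te))
```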

    End-to-End Learning of Representations for Asynchronous Event-Based Data

    Event cameras are vision sensors that record asynchronous streams of per-pixel brightness changes, referred to as "events". They have appealing advantages over frame-based cameras for computer vision, including high temporal resolution, high dynamic range, and no motion blur. Due to the sparse, non-uniform spatiotemporal layout of the event signal, pattern recognition algorithms typically aggregate events into a grid-based representation and subsequently process it with a standard vision pipeline, e.g., a convolutional neural network (CNN). In this work, we introduce a general framework to convert event streams into grid-based representations through a sequence of differentiable operations. Our framework comes with two main advantages: (i) it allows learning the input event representation together with the task-dedicated network in an end-to-end manner, and (ii) it lays out a taxonomy that unifies the majority of extant event representations in the literature and identifies novel ones. Empirically, we show that learning the event representation end-to-end yields an improvement of approximately 12% on optical flow estimation and object recognition over state-of-the-art methods.
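
    One concrete member of the grid-based family discussed above is a voxel grid that spreads each event over its two nearest temporal bins with a bilinear kernel; implemented in an autodiff framework, the same accumulation becomes one of the differentiable operations the abstract refers to. The NumPy sketch below only illustrates the accumulation itself; the bin count and the +/-1 polarity encoding are assumptions.

```python
# Sketch: accumulate an event stream (x, y, t, polarity) into a
# (num_bins, H, W) voxel grid with bilinear interpolation along time.
import numpy as np

def events_to_voxel_grid(x, y, t, p, num_bins, height, width):
    grid = np.zeros((num_bins, height, width))
    # Normalize timestamps to the continuous bin axis [0, num_bins - 1].
    t = (t - t.min()) / max(t.max() - t.min(), 1e-9) * (num_bins - 1)
    lower = np.floor(t).astype(int)
    frac = t - lower
    pol = np.where(p > 0, 1.0, -1.0)
    # Each event contributes to its two nearest temporal bins.
    np.add.at(grid, (lower, y, x), pol * (1.0 - frac))
    np.add.at(grid, (np.clip(lower + 1, 0, num_bins - 1), y, x), pol * frac)
    return grid

# Usage with synthetic events (x and y must be integer pixel indices).
rng = np.random.default_rng(0)
n = 10_000
grid = events_to_voxel_grid(
    x=rng.integers(0, 240, n), y=rng.integers(0, 180, n),
    t=np.sort(rng.random(n)), p=rng.choice([-1, 1], n),
    num_bins=5, height=180, width=240,
)
print(grid.shape)  # (5, 180, 240)
```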