2 research outputs found
ManifoldNet: A Deep Network Framework for Manifold-valued Data
Deep neural networks have become the main workhorse for many learning tasks across a variety of applications in science and engineering. Traditionally, the input to these networks lies in a vector space, and the operations employed within the network are well defined on vector spaces. In the recent past, technological advances in sensing have made it possible to acquire manifold-valued data sets either directly or indirectly.
Examples include, but are not limited to, data from omnidirectional cameras on automobiles and drones, synthetic aperture radar imaging, and, in the medical imaging domain, diffusion magnetic resonance imaging, elastography, and conductance imaging. Thus, there is a need to generalize deep neural networks to cope with input data that reside on curved manifolds, where vector space operations are not naturally admissible. In this paper, we present a novel theoretical framework to generalize the widely popular convolutional neural networks (CNNs) to high-dimensional manifold-valued data inputs. We call these networks ManifoldNets.
In ManifoldNets, the convolution operation on data residing on Riemannian manifolds is achieved via a provably convergent recursive computation of the weighted Fréchet mean (wFM) of the given data, where the weights, which make up the convolution mask, are learned. Further, we prove that the proposed wFM layer is a contraction mapping, and hence ManifoldNet does not need the nonlinear ReLU unit used in standard CNNs. We present experiments, using the ManifoldNet framework, that achieve dimensionality reduction by computing the principal linear subspaces that naturally reside on a Grassmannian. The experimental results demonstrate the efficacy of ManifoldNets in terms of classification and reconstruction accuracy.
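The recursive wFM computation described above can be illustrated on a simple Riemannian manifold. The sketch below is a minimal illustration on the unit sphere, not the authors' implementation; it follows the general incremental recipe (move the running estimate toward each new sample by a fraction given by its normalized weight, using the manifold's exponential and logarithm maps), and all function names here are our own.

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit sphere: tangent vector at p pointing toward q."""
    c = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(c)
    if theta < 1e-12:
        return np.zeros_like(p)
    u = q - c * p
    return theta * u / np.linalg.norm(u)

def sphere_exp(p, v):
    """Exp map on the unit sphere: follow the geodesic from p along v."""
    t = np.linalg.norm(v)
    if t < 1e-12:
        return p.copy()
    return np.cos(t) * p + np.sin(t) * v / t

def recursive_wfm(points, weights):
    """Incremental weighted Frechet mean estimate on the unit sphere."""
    m = points[0]
    wsum = weights[0]
    for x, w in zip(points[1:], weights[1:]):
        wsum += w
        # Step from the current estimate toward x by the normalized weight.
        m = sphere_exp(m, (w / wsum) * sphere_log(m, x))
    return m
```

For two equally weighted points, the estimate lands at their geodesic midpoint, and each update stays on the manifold by construction.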
Dilated Convolutional Neural Networks for Sequential Manifold-valued Data
Efforts are underway to extend the power of deep neural networks to non-standard data types such as structured data (e.g., graphs) or manifold-valued data (e.g., unit vectors or special matrices). Often, sizable empirical improvements are possible when the geometry of such data spaces is incorporated into the design of the model, the architecture, and the algorithms. Motivated by neuroimaging applications, we study formulations where the data are sequential manifold-valued measurements. This case is common in brain imaging, where the samples correspond to symmetric positive definite matrices or orientation distribution functions. Instead of a recurrent model, which poses computational and technical issues, and inspired by recent results showing the viability of dilated convolutional models for sequence prediction, we develop a dilated convolutional neural network architecture for this task. On the technical side,
we show how the modules needed in our network can be derived while explicitly
taking the Riemannian manifold structure into account. We show how the
operations needed can leverage known results for calculating the weighted
Fréchet mean (wFM). Finally, we present scientific results for group difference analysis in Alzheimer's disease (AD), where the groups are derived using AD pathology load: here, the model finds several brain fiber bundles that are related to AD even when the subjects are all still cognitively healthy.
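The dilated structure itself can be sketched independently of the manifold operations. The toy example below assumes nothing about the authors' code: it shows how a single dilated layer gathers inputs spaced `dilation` steps apart and combines them with convex (normalized) weights. In the actual architecture, this Euclidean weighted mean would be replaced by the wFM on the appropriate manifold.

```python
import numpy as np

def dilated_wfm_layer(seq, weights, dilation):
    """One dilated 'convolution' over a sequence: each output combines
    len(weights) inputs spaced `dilation` steps apart.
    A Euclidean weighted mean stands in for the manifold wFM here."""
    k = len(weights)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # wFM weights are convex coefficients
    span = (k - 1) * dilation
    out = []
    for t in range(span, len(seq)):
        # Gather the dilated taps ending at time t.
        taps = [seq[t - i * dilation] for i in range(k)]
        out.append(sum(wi * x for wi, x in zip(w, taps)))
    return out
```

Stacking such layers with growing dilation factors lets the receptive field cover long sequences without recurrence, which is the motivation the abstract cites for preferring dilated convolutions over recurrent models.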