Representation of Functional Data in Neural Networks
Functional Data Analysis (FDA) extends traditional data analysis to
functional data such as spectra, time series, spatio-temporal images, or
gesture recognition data. In practice, functional data are rarely observed
directly; usually only a regular or irregular sampling is available. For this
reason, some preprocessing is needed in order to benefit from the smooth
character of functional data in the analysis methods. This paper shows how to extend the
Radial-Basis Function Networks (RBFN) and Multi-Layer Perceptron (MLP) models
to functional data inputs, in particular when the latter are known through
lists of input-output pairs. Various possibilities for functional processing
are discussed, including the projection on smooth bases, Functional Principal
Component Analysis, functional centering and reduction, and the use of
differential operators. It is shown how to incorporate these functional
preprocessing steps into the RBFN and MLP models. The functional approach is
illustrated on a benchmark of spectrometric data analysis.
Comment: Also available online from:
http://www.sciencedirect.com/science/journal/0925231
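As a concrete illustration of the projection-on-smooth-bases step, here is a
minimal sketch in Python (not the paper's implementation; the Fourier basis,
basis size, and toy curves are arbitrary choices): irregularly sampled curves
are projected onto a fixed smooth basis by least squares, and the resulting
fixed-length coefficient vectors are what an RBFN or MLP would consume.

```python
import numpy as np

def fourier_design(t, n_basis):
    """Evaluate a truncated Fourier basis at sample points t in [0, 1]."""
    cols = [np.ones_like(t)]
    for k in range(1, n_basis // 2 + 1):
        cols.append(np.sin(2 * np.pi * k * t))
        cols.append(np.cos(2 * np.pi * k * t))
    return np.stack(cols[:n_basis], axis=1)       # shape (len(t), n_basis)

def project_curve(t, y, n_basis=11):
    """Least-squares projection of one sampled curve onto the smooth basis.

    t, y: irregular sample locations and observed values of one function.
    Returns the basis coefficients, which replace the raw samples as
    network inputs.
    """
    B = fourier_design(t, n_basis)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coef

# Toy usage: two curves sampled on different irregular grids still map to
# fixed-length coefficient vectors, so they can share one MLP input layer.
rng = np.random.default_rng(0)
t1, t2 = np.sort(rng.uniform(size=80)), np.sort(rng.uniform(size=50))
c1 = project_curve(t1, np.sin(2 * np.pi * t1) + 0.05 * rng.normal(size=80))
c2 = project_curve(t2, np.cos(2 * np.pi * t2) + 0.05 * rng.normal(size=50))
print(c1.shape, c2.shape)  # (11,) (11,) -- same dimension despite different grids
```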
Towards glass-box CNNs
The substantial performance of neural networks in sensitive fields increases
the need for interpretable deep learning models. A major challenge is to
uncover the multiscale and distributed representations hidden inside the
black-box mappings of deep neural networks. Researchers have tried to
comprehend them through visual analysis of features, mathematical structures,
or other data-driven approaches. Here, we work on implementation invariances
of CNN-based representations and present an analytical binary prototype that
provides useful insights for large-scale real-life applications. We begin by
unfolding a conventional CNN and then repacking it into a more transparent
representation. Inspired by the success of neural networks, we present our
findings as a three-layer model. The first is a representation layer
that encompasses both the class information (group invariant) and symmetric
transformations (group equivariant) of input images. Through these
transformations, we decrease intra-class distance and increase the inter-class
distance. It is then passed through a dimension reduction layer followed by a
classifier. The proposed representation is compared with the equivariance of
AlexNet's internal representation to aid interpretation of the simulation
results. We foresee the following immediate advantages of this toy model: i)
it contributes data pre-processing that increases feature or class
separability in large-scale problems, ii) it helps in designing neural
architectures that improve classification performance in multi-class
problems, and iii) it helps in building interpretable CNNs from scalable
functional blocks.
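The intra-/inter-class distance criterion mentioned in the abstract can be
made concrete with a few lines of code. The following is a minimal sketch,
not the authors' pipeline; the function name and the toy Gaussian data are
illustrative:

```python
import numpy as np

def class_separability(features, labels):
    """Mean intra-class vs. inter-class Euclidean distance.

    A representation that shrinks intra-class distance while growing
    inter-class distance (ratio << 1) is easier to classify.
    """
    features, labels = np.asarray(features), np.asarray(labels)
    # Full pairwise distance matrix.
    diff = features[:, None, :] - features[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    intra = dist[same & off_diag].mean()
    inter = dist[~same].mean()
    return intra, inter, intra / inter

# Toy usage with two well-separated Gaussian classes.
rng = np.random.default_rng(0)
x = np.vstack([rng.normal(0, 1, (50, 8)), rng.normal(4, 1, (50, 8))])
y = np.repeat([0, 1], 50)
print(class_separability(x, y))  # intra, inter, ratio < 1
```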
Neural Functional Transformers
The recent success of neural networks as implicit representations of data has
driven growing interest in neural functionals: models that can process other
neural networks as input by operating directly over their weight spaces.
Nevertheless, constructing expressive and efficient neural functional
architectures that can handle high-dimensional weight-space objects remains
challenging. This paper uses the attention mechanism to define a novel set of
permutation equivariant weight-space layers and composes them into deep
equivariant models called neural functional Transformers (NFTs). NFTs respect
weight-space permutation symmetries while incorporating the advantages of
attention, which have exhibited remarkable success across multiple domains. In
experiments processing the weights of feedforward MLPs and CNNs, we find that
NFTs match or exceed the performance of prior weight-space methods. We also
leverage NFTs to develop Inr2Array, a novel method for computing permutation
invariant latent representations from the weights of implicit neural
representations (INRs). Our proposed method improves INR classification
accuracy over existing methods. We provide an implementation
of our layers at https://github.com/AllanYangZhou/nfn
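As background for the permutation symmetry the abstract relies on, the sketch
below demonstrates (it is not the NFT layer itself) that plain self-attention
without positional encodings is permutation equivariant: permuting the input
tokens permutes the outputs identically, which is the property NFT layers are
built to respect over weight-space neurons.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Plain single-head self-attention over a set of tokens (no positions)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)       # row-wise softmax
    return attn @ v

rng = np.random.default_rng(0)
d = 16
x = rng.normal(size=(10, d))                      # 10 weight-space "tokens"
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))

perm = rng.permutation(10)
out = self_attention(x, wq, wk, wv)
out_perm = self_attention(x[perm], wq, wk, wv)
# Permuting the input rows permutes the output rows identically:
print(np.allclose(out[perm], out_perm))           # True
```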
Geographic Location Encoding with Spherical Harmonics and Sinusoidal Representation Networks
Learning feature representations of geographical space is vital for any
machine learning model that integrates geolocated data, spanning application
domains such as remote sensing, ecology, or epidemiology. Recent work mostly
embeds coordinates using sine and cosine projections based on Double Fourier
Sphere (DFS) features -- these embeddings assume a rectangular data domain even
on global data, which can lead to artifacts, especially at the poles. At the
same time, relatively little attention has been paid to the exact design of the
neural network architectures these functional embeddings are combined with.
This work proposes a novel location encoder for globally distributed geographic
data that combines spherical harmonic basis functions, natively defined on
spherical surfaces, with sinusoidal representation networks (SirenNets) that
can be interpreted as a learned Double Fourier Sphere embedding. We
systematically evaluate the cross-product of positional embeddings and neural
network architectures across various classification and regression benchmarks
and synthetic evaluation datasets. In contrast to previous approaches that
require the combination of both positional encoding and neural networks to
learn meaningful representations, we show that both spherical harmonics and
sinusoidal representation networks are competitive on their own, but set
state-of-the-art performance across tasks when combined. We provide source
code at www.github.com/marccoru/locationencode
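To make the two ingredients concrete, here is a minimal sketch (not the
paper's encoder): spherical-harmonic features of a latitude/longitude point
computed with scipy.special.sph_harm (renamed sph_harm_y in recent SciPy),
passed through one SIREN-style sine layer. The maximum degree, layer width,
and initialization are illustrative choices.

```python
import numpy as np
from scipy.special import sph_harm

def sh_features(lat_deg, lon_deg, l_max=3):
    """Real spherical-harmonic features for a latitude/longitude pair.

    Uses scipy's convention: theta is the azimuthal angle, phi the polar
    (colatitude) angle. Real and imaginary parts give real-valued features.
    """
    theta = np.deg2rad(lon_deg) % (2 * np.pi)     # azimuth in [0, 2pi)
    phi = np.deg2rad(90.0 - lat_deg)              # colatitude in [0, pi]
    feats = []
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            y = sph_harm(m, l, theta, phi)
            feats.extend([y.real, y.imag])
    return np.array(feats)

def siren_layer(x, w, b, omega0=30.0):
    """One SIREN layer: sine activation with frequency scaling omega0."""
    return np.sin(omega0 * (x @ w + b))

# Toy usage: encode one location and pass it through a random SIREN layer.
rng = np.random.default_rng(0)
z = sh_features(lat_deg=48.1, lon_deg=11.6)       # l_max=3 -> 32 features
w = rng.uniform(-1, 1, size=(z.size, 16)) / z.size
h = siren_layer(z, w, rng.uniform(-1, 1, 16))
print(z.shape, h.shape)                           # (32,) (16,)
```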
Intrinsic functional network contributions to the relationship between trait empathy and subjective happiness
Elucidating the resting-state functional brain networks that link happiness and empathy: the role of prefrontal cortex functional connectivity. Kyoto University press release, 2021-01-08.
Subjective happiness (well-being) is a multi-dimensional construct indexing one's evaluations of everyday emotional experiences and life satisfaction, and has been associated with different aspects of trait empathy. Despite previous research identifying the neural substrates of subjective happiness and empathy, the mechanisms mediating the relationship between the two constructs remain largely unclear. Here, we performed a data-driven, multi-voxel pattern analysis of whole-brain intrinsic functional connectivity to reveal the neural mechanisms of subjective happiness and trait empathy in a sample of young females. Behaviorally, we found that subjective happiness was negatively associated with personal distress (i.e., self-referential experience of others' feelings). Consistent with this inverse relationship, subjective happiness was associated with the dorsolateral prefrontal cortex exhibiting decreased functional connectivity with regions important for the representation of unimodal sensorimotor information (e.g., primary sensory cortices) or multi-modal summaries of brain states (e.g., default mode network), and increased functional connectivity with regions important for the attentional modulation of these representations (e.g., frontoparietal and attention networks). Personal distress was associated with the medial prefrontal cortex exhibiting functional connectivity differences with similar networks, but in the opposite direction. Finally, intrinsic functional connectivity within and between these networks fully mediated the relationship between the two behavioral measures. These results identify an important contribution of the macroscale functional organization of the brain to human well-being, by demonstrating that lower levels of personal distress lead to higher subjective happiness through variation in intrinsic functional connectivity along a neural representation vs. modulation gradient.
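For readers unfamiliar with the mediation analysis the last result relies on,
here is a generic single-mediator product-of-coefficients sketch; it is not
the study's connectome-wide procedure, and all variable names and toy data
are illustrative:

```python
import numpy as np

def simple_mediation(x, m, y):
    """Single-mediator product-of-coefficients estimate.

    x: predictor (e.g., personal distress), m: mediator (e.g., a
    connectivity measure), y: outcome (e.g., subjective happiness).
    Returns paths a (x -> m), b (m -> y given x), the indirect effect a*b,
    and the direct effect c'.
    """
    a = np.polyfit(x, m, 1)[0]                    # slope of m ~ x
    X = np.column_stack([np.ones_like(x), x, m])  # y ~ 1 + x + m
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    b, c_prime = coef[2], coef[1]
    return a, b, a * b, c_prime

# Toy usage: y depends on x only through m (full mediation by construction).
rng = np.random.default_rng(0)
x = rng.normal(size=500)
m = 0.8 * x + 0.3 * rng.normal(size=500)
y = 0.7 * m + 0.3 * rng.normal(size=500)
a, b, indirect, direct = simple_mediation(x, m, y)
print(round(indirect, 2), round(direct, 2))       # ~0.56 and ~0.0
```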
Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks
The symmetry and geometry of input data are considered to be encoded in the
internal data representation inside the neural network, but the specific
encoding rule has been less investigated. In this study, we present a
systematic method to induce a generalized neural network and its right inverse
operator, called the ridgelet transform, from a joint group invariant function
on the data-parameter domain. Since the ridgelet transform is an inverse, (1)
it can describe the arrangement of parameters for the network to represent a
target function, which is understood as the encoding rule, and (2) it implies
the universality of the network. Based on the group representation theory, we
present a new simple proof of the universality by using Schur's lemma in a
unified manner covering a wide class of networks, for example, the original
ridgelet transform, formal deep networks, and the dual voice transform. Since
traditional universality theorems were demonstrated based on functional
analysis, this study sheds light on the group theoretic aspect of the
approximation theory, connecting geometric deep learning to abstract harmonic
analysis.
Comment: NeurReps 2023
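For context, the classical shallow, scalar-valued case that the paper
generalizes can be stated compactly; the formulas below are the standard ones
from the ridgelet literature, not reproduced from this paper:

```latex
% Continuous (integral) neural network with activation \sigma and
% coefficient function \gamma over parameters (a, b):
S[\gamma](x) = \int_{\mathbb{R}^d \times \mathbb{R}}
    \gamma(a, b)\, \sigma(\langle a, x \rangle - b)\,
    \mathrm{d}a\, \mathrm{d}b
% Ridgelet transform of a target function f with respect to \psi:
R[f](a, b) = \int_{\mathbb{R}^d}
    f(x)\, \overline{\psi(\langle a, x \rangle - b)}\, \mathrm{d}x
% For an admissible pair (\sigma, \psi), S[R[f]] = f: the ridgelet
% transform is a right inverse of S, describing how to arrange the
% parameters (a, b) so the network reproduces f, and its existence
% implies the universality of the network.
```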
Preserving Differential Privacy in Convolutional Deep Belief Networks
The remarkable development of deep learning in the medicine and healthcare
domain presents obvious privacy issues when deep neural networks are built on
users'
personal and highly sensitive data, e.g., clinical records, user profiles,
biomedical images, etc. However, only a few scientific studies on preserving
privacy in deep learning have been conducted. In this paper, we focus on
developing a private convolutional deep belief network (pCDBN), which
essentially is a convolutional deep belief network (CDBN) under differential
privacy. Our main idea of enforcing epsilon-differential privacy is to leverage
the functional mechanism to perturb the energy-based objective functions of
traditional CDBNs, rather than their results. One key contribution of this work
is that we propose the use of Chebyshev expansion to derive the approximate
polynomial representation of objective functions. Our theoretical analysis
shows that we can further derive the sensitivity and error bounds of the
approximate polynomial representation. As a result, preserving differential
privacy in CDBNs is feasible. We applied our model in a health social network,
i.e., YesiWell data, and in a handwriting digit dataset, i.e., MNIST data, for
human behavior prediction, human behavior classification, and handwriting digit
recognition tasks. Theoretical analysis and rigorous experimental evaluations
show that the pCDBN is highly effective, significantly outperforming existing
solutions.
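To make the functional-mechanism idea concrete, here is a generic sketch of
the pattern (illustrative only; the sensitivity value is assumed, whereas the
paper derives sensitivity and error bounds analytically for CDBN energy
functions): approximate a nonlinear objective term by a low-degree Chebyshev
polynomial, then perturb the polynomial's coefficients with Laplace noise
before optimization.

```python
import numpy as np

# Target nonlinearity appearing in the objective (softplus, as an example).
f = lambda x: np.log1p(np.exp(x))

# 1) Low-degree Chebyshev approximation of f on a bounded interval.
xs_fit = np.linspace(-4, 4, 400)
cheb = np.polynomial.Chebyshev.fit(xs_fit, f(xs_fit), deg=5)

# 2) Functional mechanism: perturb the *coefficients* of the polynomial
#    objective, not the trained results. `sensitivity` is assumed here;
#    the paper derives it analytically for the CDBN energy function.
epsilon, sensitivity = 1.0, 2.0
rng = np.random.default_rng(0)
noisy_coef = cheb.coef + rng.laplace(scale=sensitivity / epsilon,
                                     size=cheb.coef.shape)
private_obj = np.polynomial.Chebyshev(noisy_coef, domain=cheb.domain)

# 3) Any optimizer can now minimize the perturbed polynomial objective;
#    the privacy guarantee is independent of the number of iterations.
xs = np.linspace(-4, 4, 5)
print(np.round(f(xs), 2))
print(np.round(private_obj(xs), 2))   # noisy polynomial surrogate of f
```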