SPLICE : Fully tractable hierarchical extension of ICA with pooling
We present a novel probabilistic framework for a hierarchical extension of independent component analysis (ICA), with a particular motivation in neuroscientific data analysis and modeling. The framework incorporates general sub-space pooling with linear ICA-like layers stacked recursively. Unlike related previous models, our generative model is fully tractable: both the likelihood and the posterior estimates of latent variables can readily be computed with analytically simple formulae. The model is particularly simple in the case of complex-valued data, since the pooling reduces to taking the modulus of complex numbers. Experiments on electroencephalography (EEG) and natural images demonstrate the validity of the method. Copyright 2017 by the author(s).
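The modulus-based pooling for complex-valued data described in this abstract can be sketched in a few lines. The function names (`ica_layer`, `pool`) and the random unmixing matrix below are illustrative assumptions, not the paper's actual model or code; the point is only that pooling reduces to `np.abs` on complex responses.

```python
import numpy as np

rng = np.random.default_rng(0)

def ica_layer(X, W):
    """Linear ICA-like layer: project data X (n_samples, n_dims)
    with a complex unmixing matrix W (n_components, n_dims)."""
    return X @ W.conj().T

def pool(Z):
    """Sub-space pooling for complex responses: take the modulus,
    discarding phase and keeping the 'energy' of each component."""
    return np.abs(Z)

# Toy data and a random complex unmixing matrix (illustrative only).
X = rng.standard_normal((100, 4))
W = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))
pooled = pool(ica_layer(X, W))
print(pooled.shape)  # (100, 3); all entries are non-negative
```

Because the pooled outputs are plain non-negative reals, subsequent layers can treat them like any other linear-layer input, which is what makes the recursive stacking tractable.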
Emerging Consciousness as a Result of Complex-Dynamical Interaction Process
A quite general interaction process within a multi-component system is analysed by the extended effective potential method, free of the usual limitations of perturbation theory or integrable models. The obtained causally complete solution of the many-body problem reveals the phenomenon of dynamic multivaluedness, or redundance, of emerging, incompatible system realisations and dynamic entanglement of system components within each realisation. The ensuing concept of dynamic complexity (and related intrinsic chaoticity) is absolutely universal and can be applied to the problem of consciousness, which emerges as a high enough, properly specified level of unreduced complexity of a suitable interaction process. This complexity level can be identified with the appearance of bound, permanently localised states in the multivalued brain dynamics from strongly chaotic states of unconscious intelligence, by analogy with classical behaviour emergence from quantum states at much lower levels of world dynamics. We show that the main properties of this dynamically emerging consciousness (and intelligence, at the preceding complexity level) correspond to empirically derived properties of natural versions and obtain causally substantiated conclusions about their artificial realisation, including the fundamentally justified paradigm of genuine machine consciousness. This rigorously defined machine consciousness is different from both natural consciousness and any mechanistic, dynamically single-valued imitation of the latter. We then use the same, truly universal concept of complexity to derive equally rigorous conclusions about mental and social implications of the machine consciousness paradigm, demonstrating its indispensable role in the next stage of civilisation development.
Slow feature analysis yields a rich repertoire of complex cell properties
In this study, we investigate temporal slowness as a learning principle for receptive fields using slow feature analysis, a new algorithm to determine functions that extract slowly varying signals from the input data.
We find that the learned functions trained on image sequences develop many properties found also experimentally in complex cells of primary visual cortex, such as direction selectivity, non-orthogonal inhibition, end-inhibition and side-inhibition.
Our results demonstrate that a single unsupervised learning principle can account for such a rich repertoire of receptive field properties.
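The slowness principle behind this abstract has a simple linear form: whiten the input, then find the directions along which the whitened signal's temporal derivative has the smallest variance. A minimal numpy sketch of linear slow feature analysis, on assumed toy data (a slow sine mixed with a fast one), not the paper's image-sequence experiments:

```python
import numpy as np

def linear_sfa(X, n_features):
    """Linear slow feature analysis on a time series X (T, n_channels)."""
    X = X - X.mean(axis=0)
    # Whiten the input signals.
    d, E = np.linalg.eigh(np.cov(X, rowvar=False))
    Z = X @ (E / np.sqrt(d))
    # Slow directions: eigenvectors of the covariance of the temporal
    # derivative (finite differences) with the smallest eigenvalues.
    dZ = np.diff(Z, axis=0)
    _, W = np.linalg.eigh(np.cov(dZ, rowvar=False))
    return Z @ W[:, :n_features]  # slowest features first

# Two channels mixing a slow sine with a much faster one.
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(37 * t)
X = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])
Y = linear_sfa(X, 1)  # Y[:, 0] recovers the slow sine (up to sign/scale)
```

The nonlinear variant used for complex-cell modelling applies the same eigenproblem after a fixed nonlinear (e.g. quadratic) expansion of the input.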
Tensor Regression with Applications in Neuroimaging Data Analysis
Classical regression methods treat covariates as a vector and estimate a
corresponding vector of regression coefficients. Modern applications in medical
imaging generate covariates of more complex form such as multidimensional
arrays (tensors). Traditional statistical and computational methods are proving
insufficient for analysis of these high-throughput data due to their ultrahigh
dimensionality as well as complex structure. In this article, we propose a new
family of tensor regression models that efficiently exploit the special
structure of tensor covariates. Under this framework, ultrahigh dimensionality
is reduced to a manageable level, resulting in efficient estimation and
prediction. A fast and highly scalable estimation algorithm is proposed for
maximum likelihood estimation and its associated asymptotic properties are
studied. Effectiveness of the new methods is demonstrated on both synthetic and
real MRI data.
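The core idea of exploiting tensor structure in the coefficients can be illustrated with the simplest case: matrix covariates and a rank-1 coefficient matrix B = b1 b2ᵀ, fitted by alternating least squares. This is a hedged sketch of the general approach, not the paper's algorithm; all variable names and the synthetic data are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic matrix covariates X_i (p1 x p2) and noiseless responses
# y_i = b1' X_i b2 from a rank-1 coefficient matrix.
n, p1, p2 = 500, 8, 6
X = rng.standard_normal((n, p1, p2))
b1_true, b2_true = rng.standard_normal(p1), rng.standard_normal(p2)
y = np.einsum('ijk,j,k->i', X, b1_true, b2_true)

# Alternating least squares: with b2 fixed the model is linear in b1
# (design matrix X @ b2), and symmetrically for b2. Note the model has
# p1 + p2 parameters instead of p1 * p2, the dimension reduction the
# abstract refers to.
b1 = rng.standard_normal(p1)
b2 = rng.standard_normal(p2)
for _ in range(20):
    b1 = np.linalg.lstsq(X @ b2, y, rcond=None)[0]
    b2 = np.linalg.lstsq(np.einsum('ijk,j->ik', X, b1), y, rcond=None)[0]

B_hat = np.outer(b1, b2)
rel_err = np.linalg.norm(B_hat - np.outer(b1_true, b2_true)) \
          / np.linalg.norm(np.outer(b1_true, b2_true))
# rel_err is near zero on this noiseless example
```

Higher-rank and higher-order versions cycle over more factor vectors in the same fashion, each sub-step remaining an ordinary least-squares problem.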
Fourier Optics approach to imaging with sub-wavelength resolution through metal-dielectric multilayers
Metal-dielectric layered stacks for imaging with sub-wavelength resolution
are regarded as linear isoplanatic systems - a concept popular in Fourier
Optics and in scalar diffraction theory. In this context, a layered flat lens
is a one-dimensional spatial filter characterised by its point spread function (PSF).
However, depending on the model of the source, the definition of the point
spread function for multilayers with sub-wavelength resolution may be
formulated in several ways. Here, a distinction is made between a soft source
and hard electric or magnetic sources. Each of these definitions leads to a
different meaning of perfect imaging. It is shown that some simple
interpretations of the PSF, such as the relation of its width to the resolution
of the imaging system, are ambiguous for multilayers with sub-wavelength
resolution. These differences must be observed in point-spread-function
engineering of layered systems with sub-wavelength-sized PSFs.
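The isoplanatic-filter picture underlying this abstract is easy to sketch numerically: the multilayer acts as a 1-D spatial-frequency filter t(kx), the PSF is its inverse Fourier transform, and imaging is convolution with that PSF. The Gaussian transfer function below is a toy stand-in, not a real metal-dielectric multilayer response.

```python
import numpy as np

# 1-D spatial grid and the corresponding spatial frequencies kx.
N, dx = 1024, 0.01                      # samples, spatial step (arb. units)
kx = 2 * np.pi * np.fft.fftfreq(N, d=dx)

# Toy low-pass transfer function t(kx); a real multilayer's t(kx) would
# come from transfer-matrix calculations and be complex-valued.
k_cut = 40.0
t_kx = np.exp(-(kx / k_cut) ** 2)

# The PSF is the inverse Fourier transform of the transfer function.
psf = np.fft.fftshift(np.fft.ifft(t_kx)).real

# Imaging = multiplication by t(kx) in the frequency domain,
# i.e. convolution of the source with the PSF in the spatial domain.
source = np.zeros(N)
source[N // 2] = 1.0                    # idealised point source
image = np.fft.ifft(np.fft.fft(source) * t_kx).real
```

The abstract's point is that for sub-wavelength systems the choice of source model (soft vs. hard electric or magnetic) changes which field quantity plays the role of `source` here, and hence which PSF one obtains.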
Tensor Decompositions for Signal Processing Applications From Two-way to Multiway Component Analysis
The widespread use of multi-sensor technology and the emergence of big
datasets have highlighted the limitations of standard flat-view matrix models
and the necessity to move towards more versatile data analysis tools. We show
that higher-order tensors (i.e., multiway arrays) enable such a fundamental
paradigm shift towards models that are essentially polynomial and whose
uniqueness, unlike that of matrix methods, is guaranteed under very mild and
natural conditions. Benefiting from the power of multilinear algebra as their mathematical
backbone, data analysis techniques using tensor decompositions are shown to
have great flexibility in the choice of constraints that match data properties,
and to find more general latent components in the data than matrix-based
methods. A comprehensive introduction to tensor decompositions is provided from
a signal processing perspective, starting from the algebraic foundations, via
basic Canonical Polyadic and Tucker models, through to advanced cause-effect
and multi-view data analysis schemes. We show that tensor decompositions enable
natural generalizations of some commonly used signal processing paradigms, such
as canonical correlation and subspace techniques, signal separation, linear
regression, feature extraction and classification. We also cover computational
aspects, and point out how ideas from compressed sensing and scientific
computing may be used for addressing the otherwise unmanageable storage and
manipulation problems associated with big datasets. The concepts are supported
by illustrative real world case studies illuminating the benefits of the tensor
framework, as efficient and promising tools for modern signal processing, data
analysis and machine learning applications; these benefits also extend to
vector/matrix data through tensorization.
Keywords: ICA, NMF, CPD, Tucker decomposition, HOSVD, tensor networks, Tensor Train
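The Canonical Polyadic model named in this abstract can be sketched compactly: a 3-way tensor is expressed as a sum of rank-1 terms, fitted by alternating least squares over the three factor matrices. This is a minimal, assumed implementation for illustration (a library such as TensorLy would be used in practice), not the survey's reference code.

```python
import numpy as np

rng = np.random.default_rng(2)

def unfold(T, mode):
    """Mode-n matricization of a 3-way tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product, the workhorse of CPD-ALS."""
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cpd_als(T, rank, n_iter=50):
    """Canonical Polyadic Decomposition by alternating least squares:
    each factor update is a linear least-squares problem with the
    other two factors held fixed."""
    I, J, K = T.shape
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    for _ in range(n_iter):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Noiseless rank-2 synthetic tensor: ALS recovers it essentially exactly,
# illustrating the uniqueness under mild conditions mentioned above.
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(T, 2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
```

The Tucker model replaces the shared rank index `r` with a small core tensor contracted against per-mode factor matrices, trading uniqueness for greater flexibility.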