
    Stochastic trapping in a solvable model of on-line independent component analysis

    Previous analytical studies of on-line Independent Component Analysis (ICA) learning rules have focussed on asymptotic stability and efficiency. In practice the transient stages of learning will often be more significant in determining the success of an algorithm. This is demonstrated here with an analysis of a Hebbian ICA algorithm which can find a small number of non-Gaussian components given data composed of a linear mixture of independent source signals. An idealised data model is considered in which the sources comprise a number of non-Gaussian and Gaussian sources, and a solution to the dynamics is obtained in the limit where the number of Gaussian sources is infinite. Previous stability results are confirmed by expanding around optimal fixed points, where a closed-form solution to the learning dynamics is obtained. However, stochastic effects are shown to stabilise otherwise unstable sub-optimal fixed points. Conditions required to destabilise one such fixed point are obtained for the case of a single non-Gaussian component, indicating that the initial learning rate \eta required to escape successfully is very low (\eta = O(N^{-2}), where N is the data dimension), resulting in very slow learning that typically requires O(N^3) iterations. Simulations confirm that this picture holds for a finite system. Comment: 17 pages, 3 figures. To appear in Neural Computation.
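
    To make the quoted scalings concrete, here is a minimal sketch of the kind of on-line nonlinear Hebbian ICA update the abstract analyses, with the learning rate set at the \eta = O(N^{-2}) scale and O(N^3) iterations. The cubic nonlinearity, the Laplacian toy source, and the explicit renormalisation are illustrative assumptions, not the paper's exact rule.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 32                         # data dimension (kept small so the sketch runs quickly)
    eta = 1.0 / N**2               # learning rate at the eta = O(N^-2) scale quoted above
    w = rng.standard_normal(N)
    w /= np.linalg.norm(w)

    # Toy data: one non-Gaussian (Laplacian) source mixed with an isotropic
    # Gaussian background, mimicking the idealised data model.
    a = rng.standard_normal(N)
    a /= np.linalg.norm(a)

    for _ in range(N**3):          # O(N^3) iterations, matching the quoted scaling
        x = a * rng.laplace() + rng.standard_normal(N)
        y = w @ x
        w += eta * (y**3) * x      # nonlinear Hebbian update (cubic nonlinearity assumed)
        w /= np.linalg.norm(w)     # keep |w| = 1

    print("overlap with the non-Gaussian direction:", abs(w @ a))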

    Comparison between Oja's and BCM neural network models in finding useful projections in high-dimensional spaces

    This thesis presents the concept of a neural network starting from its corresponding biological model, paying particular attention to the learning algorithms proposed by Oja and by Bienenstock, Cooper & Munro (BCM). A brief introduction to data analysis follows, with particular reference to Principal Component Analysis and the Singular Value Decomposition. The two algorithms are then treated more thoroughly, studying in particular their connections with data analysis. Finally, the Singular Value Decomposition is proposed as a method for obtaining the stationary points of the BCM algorithm in the case of linearly dependent inputs.
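
    As a concrete reference point for the Oja/PCA connection mentioned above, here is a minimal sketch of Oja's single-unit rule converging to the first principal component, checked against the SVD of the data matrix. The toy data, learning rate, and iteration scheme are illustrative choices, not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(1)

    # Toy data with one dominant variance direction.
    X = rng.standard_normal((5000, 3)) @ np.diag([3.0, 1.0, 0.5])
    X -= X.mean(axis=0)

    w = rng.standard_normal(3)
    w /= np.linalg.norm(w)
    eta = 0.005
    for x in X:
        y = w @ x
        w += eta * y * (x - y * w)   # Oja's rule: Hebbian term with implicit normalisation

    # Compare against the leading right singular vector of the centred data.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    print("alignment with the first principal component:",
          abs(w / np.linalg.norm(w) @ Vt[0]))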

    Of 'Cocktail Parties' and Exoplanets

    The characterisation of ever smaller and fainter extrasolar planets requires an intricate understanding of one's data and the analysis techniques used. Correcting the raw data at the 10^-4 level of accuracy in flux is one of the central challenges. This can be difficult for instruments that do not feature a calibration plan for such high-precision measurements. Here, it is not always obvious how to de-correlate the data using auxiliary information of the instrument, and it becomes paramount to know how well one can disentangle instrument systematics from one's data, given nothing but the data itself. We propose a non-parametric machine learning algorithm, based on the concept of independent component analysis, to de-convolve the systematic noise and all non-Gaussian signals from the desired astrophysical signal. Such a 'blind' signal de-mixing is commonly known as the 'Cocktail Party problem' in signal processing. Given multiple simultaneous observations of the same exoplanetary eclipse, as in the case of spectrophotometry, we show that we can often disentangle systematic noise from the original light curve signal without the use of any complementary information of the instrument. In this paper, we explore these signal extraction techniques using simulated data and two data sets observed with the Hubble-NICMOS instrument. Another important application is the de-correlation of the exoplanetary signal from time-correlated stellar variability. Using data obtained by the Kepler mission, we show that the desired signal can be de-convolved from the stellar noise using a single time series spanning several eclipse events. Such non-parametric techniques can provide important confirmations of the existing parametric corrections reported in the literature and their associated results. Additionally, they can substantially improve the precision of exoplanetary light curve analysis in the future. Comment: ApJ accepted.
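
    A hedged sketch of the 'cocktail party' de-mixing idea described above: several simultaneous observations of the same eclipse are treated as different linear mixtures of an astrophysical signal plus a non-Gaussian systematic, and FastICA separates them blindly. All toy signals, the mixing matrix, and the FastICA settings are illustrative, not the paper's pipeline.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(2)
    t = np.linspace(0.0, 1.0, 1000)

    transit = 1.0 - 0.01 * ((t > 0.4) & (t < 0.6))   # box-shaped eclipse signal (toy)
    systematic = 0.005 * np.sign(np.sin(37.0 * t))   # non-Gaussian instrument noise (toy)
    sources = np.vstack([transit - transit.mean(), systematic])

    A = rng.uniform(0.5, 1.5, size=(5, 2))           # 5 spectral channels, unknown mixing
    X = (A @ sources).T + 1e-4 * rng.standard_normal((t.size, 5))

    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X)                         # columns: recovered independent components

    # The component most correlated with the eclipse shape is the candidate light curve.
    corr = [abs(np.corrcoef(S[:, k], transit)[0, 1]) for k in range(2)]
    print("best match correlation with the eclipse shape:", max(corr))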

    Blind extraction of an exoplanetary spectrum through Independent Component Analysis

    Blind-source separation techniques are used to extract the transmission spectrum of the hot Jupiter HD189733b recorded by the Hubble/NICMOS instrument. Such a 'blind' analysis of the data is based on the concept of independent component analysis. The de-trending of Hubble/NICMOS data using the sole assumption that non-Gaussian systematic noise is statistically independent from the desired light-curve signals is presented. Assuming no prior or auxiliary information but the data themselves, it is shown that spectroscopic errors only about 10-30% larger than those of parametric methods can be obtained for 11 spectral bins with bin sizes of ~0.09 microns. This represents a reasonable trade-off between the higher degree of objectivity of the non-parametric method and the smaller standard errors of parametric de-trending. Results are discussed in the light of previous analyses published in the literature. The fact that three very different analysis techniques yield comparable spectra is a strong indication of the stability of these results. Comment: ApJ accepted.
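
    To illustrate the de-trending step itself (as opposed to the separation shown in the previous sketch), the snippet below zeroes the ICA component identified as systematic and back-projects to obtain cleaned light curves per channel. The toy signals, the six channels, and the way the systematic component is flagged are illustrative assumptions, not the paper's method.

    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 800)
    transit = -0.01 * ((t > 0.45) & (t < 0.55))      # box-shaped eclipse (toy)
    ramp = 0.004 * np.sign(np.sin(23.0 * t))         # non-Gaussian systematic (toy)

    a_tr = rng.uniform(0.8, 1.2, 6)                  # per-channel transit depths
    a_sy = rng.uniform(0.5, 1.5, 6)                  # per-channel systematic weights
    X = (np.outer(transit, a_tr) + np.outer(ramp, a_sy)
         + 1e-4 * rng.standard_normal((t.size, 6)))

    ica = FastICA(n_components=2, random_state=0)
    S = ica.fit_transform(X)                         # recovered components, (time, 2)

    # Flag the systematic component (here by correlation with the known toy ramp;
    # a truly blind analysis would use a non-Gaussianity or auto-correlation criterion).
    k_sys = int(np.argmax([abs(np.corrcoef(S[:, k], ramp)[0, 1]) for k in range(2)]))
    S_clean = S.copy()
    S_clean[:, k_sys] = 0.0                          # suppress the systematic
    X_clean = ica.inverse_transform(S_clean)         # back-project to the channels

    print("channel-0 correlation with the eclipse shape after de-trending:",
          abs(np.corrcoef(X_clean[:, 0], transit)[0, 1]))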

    View-tolerant face recognition and Hebbian learning imply mirror-symmetric neural tuning to head orientation

    The primate brain contains a hierarchy of visual areas, dubbed the ventral stream, which rapidly computes object representations that are both specific for object identity and relatively robust against identity-preserving transformations like depth rotations. Current computational models of object recognition, including recent deep learning networks, generate these properties through a hierarchy of alternating selectivity-increasing filtering and tolerance-increasing pooling operations, similar to simple- and complex-cell operations. While simulations of these models recapitulate the ventral stream's progression from early view-specific to late view-tolerant representations, they fail to generate the most salient property of the intermediate representation for faces found in the brain: mirror-symmetric tuning of the neural population to head orientation. Here we prove that a class of hierarchical architectures and a broad set of biologically plausible learning rules can provide approximate invariance at the top level of the network. While most of the learning rules do not yield mirror-symmetry in the mid-level representations, we characterize a specific biologically plausible Hebb-type learning rule that is guaranteed to generate mirror-symmetric tuning to head orientation at intermediate levels of the architecture.
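
    As a minimal illustration of the alternating filtering-and-pooling architecture the abstract refers to, the sketch below computes 'simple-cell' dot products against stored template views and pools each template's responses over its transformation orbit into a view-tolerant 'complex-cell' signature. The random templates, the choice of max pooling, and all sizes are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    n_templates, n_views, dim = 16, 9, 64
    # Each template is stored together with its orbit under the transformation
    # (random stand-ins here for depth-rotated face views).
    orbits = rng.standard_normal((n_templates, n_views, dim))

    def signature(x):
        """Selectivity then tolerance: filter against every stored view,
        then pool over each template's orbit."""
        simple = np.einsum('tvd,d->tv', orbits, x)  # simple-cell responses per view
        return simple.max(axis=1)                   # complex-cell pooling over views

    x = rng.standard_normal(dim)                    # a novel input image (toy)
    print(signature(x).shape)                       # one view-tolerant unit per template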