
    Fourier PCA and Robust Tensor Decomposition

    Fourier PCA is Principal Component Analysis of a matrix obtained from higher-order derivatives of the logarithm of the Fourier transform of a distribution. We make this method algorithmic by developing a tensor decomposition method for a pair of tensors sharing the same vectors in their rank-$1$ decompositions. Our main application is the first provably polynomial-time algorithm for underdetermined ICA, i.e., learning an $n \times m$ matrix $A$ from observations $y = Ax$ where $x$ is drawn from an unknown product distribution with arbitrary non-Gaussian components. The number of component distributions $m$ can be arbitrarily higher than the dimension $n$, and the columns of $A$ only need to satisfy a natural and efficiently verifiable nondegeneracy condition. As a second application, we give an alternative algorithm for learning mixtures of spherical Gaussians with linearly independent means. These results also hold in the presence of Gaussian noise. Comment: Extensively revised; details added; minor errors corrected; exposition improved.
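
    As a hedged illustration of the underlying idea (not the paper's underdetermined-ICA algorithm), the sketch below estimates the Hessian of the empirical log characteristic function log E[exp(i<u, y>)] at a random point u by finite differences over samples y = Ax, then diagonalises it; in the fully determined, nondegenerate case its eigenvectors line up with directions determined by the columns of A. All dimensions, sample sizes, and variable names are arbitrary choices made for this sketch.

        import numpy as np

        rng = np.random.default_rng(0)

        n, m, N = 5, 5, 200_000                 # fully determined case, for simplicity
        A = rng.normal(size=(n, m))             # unknown mixing matrix
        x = rng.uniform(-1, 1, size=(m, N))     # non-Gaussian product distribution
        y = A @ x                               # observations

        def log_cf(u):
            """Empirical log characteristic function log E[exp(i u.y)]."""
            return np.log(np.mean(np.exp(1j * (u @ y))))

        # Hessian of log_cf at a random point u0, via second-order finite differences
        u0 = rng.normal(scale=0.1, size=n)
        eps = 1e-3
        H = np.zeros((n, n), dtype=complex)
        I = np.eye(n)
        for a in range(n):
            for b in range(n):
                H[a, b] = (log_cf(u0 + eps * (I[a] + I[b])) - log_cf(u0 + eps * I[a])
                           - log_cf(u0 + eps * I[b]) + log_cf(u0)) / eps**2

        # "Fourier PCA": eigendecomposition of the resulting reweighted covariance-like matrix
        eigvals, eigvecs = np.linalg.eig(H)
        print(np.round(eigvals, 3))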

    Long Wavelength VCSELs and VCSEL-Based Processing of Microwave Signals

    We address the challenge of decreasing the size, cost, and power consumption of next-generation microwave photonics systems for practical applications by using long-wavelength vertical-cavity surface-emitting lasers. Several demonstrations of new concepts for microwave photonics devices are presented and discussed.

    Nonparametric Regression on a Graph

    The 'Signal plus Noise' model for nonparametric regression can be extended to the case of observations taken at the vertices of a graph. This model includes many familiar regression problems. This article discusses the use of the edges of a graph to measure roughness in penalized regression. Distance between estimate and observation is measured at every vertex in the L2 norm, and roughness is penalized on every edge in the L1 norm. Thus the ideas of total variation penalization can be extended to a graph. The resulting minimization problem presents special computational challenges, so we describe a new and fast algorithm and demonstrate its use with examples. The examples include image analysis, a simulation applicable to discrete spatial variation, and classification. In our examples, penalized regression improves upon kernel smoothing in terms of identifying local extreme values on planar graphs. In all examples we use fully automatic procedures for setting the smoothing parameters. Supplemental materials are available online.
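
    For a small graph, the penalized objective described above can be written down directly. The sketch below assumes a toy chain graph and uses the generic convex solver cvxpy rather than the fast special-purpose algorithm developed in the article, and the smoothing parameter is set by hand instead of automatically: it penalizes the L2 distance to the data at every vertex and the L1 total variation across every edge.

        import cvxpy as cp
        import numpy as np

        # toy graph: a chain of 6 vertices carrying a noisy step signal
        y = np.array([0.1, -0.2, 0.0, 1.1, 0.9, 1.0])     # observations at the vertices
        edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  # edges of the graph
        lam = 0.8                                         # smoothing parameter (chosen by hand here)

        f = cp.Variable(len(y))
        fidelity = cp.sum_squares(y - f)                          # L2 distance at every vertex
        roughness = sum(cp.abs(f[i] - f[j]) for i, j in edges)    # L1 roughness on every edge
        cp.Problem(cp.Minimize(fidelity + lam * roughness)).solve()
        print(np.round(f.value, 3))   # roughly piecewise constant, with the jump preserved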

    Transverse instabilities of multiple vortex chains in superconductor-ferromagnet bilayers

    Using scanning tunneling microscopy and Ginzburg-Landau simulations, we explore vortex configurations in a magnetically coupled NbSe$_2$/Permalloy superconductor-ferromagnet bilayer. The Permalloy film with stripe domain structure induces a periodic local magnetic induction in the superconductor, creating a series of pinning-antipinning channels for externally added magnetic flux quanta. Such laterally confined Abrikosov vortices form quasi-1D arrays (chains). At intermediate fields, the transitions between multichain states occur through propagation of kinks. At high fields, we show that the system becomes nonlinear due to a change in both the number of vortices and the confining potential. The longitudinal instabilities of the resulting vortex structures lead to vortices 'levitating' in the antipinning channels. Comment: accepted in PRB-Rapid.

    Smoothed Analysis of Tensor Decompositions

    Low rank tensor decompositions are a powerful tool for learning generative models, and uniqueness results give them a significant advantage over matrix decomposition methods. However, tensors pose significant algorithmic challenges, and tensor analogs of much of the matrix algebra toolkit are unlikely to exist because of hardness results. Efficient decomposition in the overcomplete case (where rank exceeds dimension) is particularly challenging. We introduce a smoothed analysis model for studying these questions and develop an efficient algorithm for tensor decomposition in the highly overcomplete case (rank polynomial in the dimension). In this setting, we show that our algorithm is robust to inverse polynomial error -- a crucial property for applications in learning since we are only allowed a polynomial number of samples. While algorithms are known for exact tensor decomposition in some overcomplete settings, our main contribution is in analyzing their stability in the framework of smoothed analysis. Our main technical contribution is to show that tensor products of perturbed vectors are linearly independent in a robust sense (i.e. the associated matrix has singular values that are at least an inverse polynomial). This key result paves the way for applying tensor methods to learning problems in the smoothed setting. In particular, we use it to obtain results for learning multi-view models and mixtures of axis-aligned Gaussians where there are many more "components" than dimensions. The assumption here is that the model is not adversarially chosen, formalized by a perturbation of model parameters. We believe this is an appealing way to analyze realistic instances of learning problems, since this framework allows us to overcome many of the usual limitations of using tensor methods. Comment: 32 pages (including appendix).
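
    The key robustness statement is easy to probe numerically. The sketch below is an illustration, not the paper's analysis: it perturbs a set of base vectors, stacks the flattened tensor products v_i (x) v_i as columns of a matrix, and checks that the smallest singular value stays bounded away from zero. The dimension, the rank, and the perturbation size are arbitrary choices for the sketch.

        import numpy as np

        rng = np.random.default_rng(1)
        n, r, rho = 10, 40, 0.1                       # dimension, rank (r >> n), perturbation size

        base = rng.normal(size=(n, r))                # arbitrary base vectors
        V = base + rho * rng.normal(size=(n, r))      # smoothed (randomly perturbed) vectors

        # columns are the flattened tensor products v_i (x) v_i, living in R^{n^2}
        M = np.column_stack([np.outer(V[:, i], V[:, i]).ravel() for i in range(r)])

        sigma = np.linalg.svd(M, compute_uv=False)
        print(f"smallest singular value of the {n*n} x {r} matrix: {sigma[-1]:.4f}")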

    Aligning Manifolds of Double Pendulum Dynamics Under the Influence of Noise

    This study presents the results of a series of simulation experiments that evaluate and compare four different manifold alignment methods under the influence of noise. The data was created by simulating the dynamics of two slightly different double pendulums in three-dimensional space. The method of semi-supervised feature-level manifold alignment using global distance resulted in the most convincing visualisations. However, the semi-supervised feature-level local alignment methods resulted in smaller alignment errors. These local alignment methods were also more robust to noise and faster than the other methods. Comment: The final version will appear in ICONIP 2018. A DOI identifier to the final version will be added to the preprint as soon as it is available.
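
    None of the four alignment methods compared in the study is reproduced here. As a hedged stand-in, the sketch below aligns two noisy synthetic 3-D trajectories with a plain orthogonal Procrustes fit on a handful of known correspondences, which captures only the semi-supervised ingredient of the setup; all data and parameters are invented for the sketch.

        import numpy as np
        from scipy.linalg import orthogonal_procrustes

        rng = np.random.default_rng(2)
        t = np.linspace(0, 4 * np.pi, 500)
        X = np.column_stack([np.sin(t), np.cos(t), 0.5 * t])   # trajectory of system 1
        R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))      # unknown rotation between systems
        Y = X @ R_true + 0.05 * rng.normal(size=X.shape)       # noisy trajectory of system 2

        idx = rng.choice(len(t), size=20, replace=False)       # labelled correspondences
        R_hat, _ = orthogonal_procrustes(Y[idx], X[idx])       # estimate the alignment

        err = np.mean(np.linalg.norm(Y @ R_hat - X, axis=1))   # residual alignment error
        print(f"mean alignment error after Procrustes: {err:.3f}")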

    Non-Redundant Spectral Dimensionality Reduction

    Spectral dimensionality reduction algorithms are widely used in numerous domains, including recognition, segmentation, tracking, and visualization. However, despite their popularity, these algorithms suffer from a major limitation known as the "repeated Eigen-directions" phenomenon. That is, many of the embedding coordinates they produce typically capture the same direction along the data manifold. This leads to redundant and inefficient representations that do not reveal the true intrinsic dimensionality of the data. In this paper, we propose a general method for avoiding redundancy in spectral algorithms. Our approach relies on replacing the orthogonality constraints underlying those methods by unpredictability constraints. Specifically, we require that each embedding coordinate be unpredictable (in the statistical sense) from all previous ones. We prove that these constraints necessarily prevent redundancy, and provide a simple technique to incorporate them into existing methods. As we illustrate on challenging high-dimensional scenarios, our approach produces significantly more informative and compact representations, which improve visualization and classification tasks.
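
    One concrete way to read the unpredictability constraint is as a post-hoc filter: compute an ordinary spectral embedding and keep only those coordinates that a nonparametric regressor cannot predict from the coordinates kept so far. The sketch below does exactly that; it is an illustration of the idea under assumed hyperparameters (an RBF kernel ridge regressor and a 0.9 R-squared threshold), not the paper's method of building the constraints into the algorithm itself.

        import numpy as np
        from sklearn.datasets import make_s_curve
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.manifold import SpectralEmbedding

        X, _ = make_s_curve(n_samples=1500, random_state=0)
        E = SpectralEmbedding(n_components=6, random_state=0).fit_transform(X)

        kept = [0]                                   # always keep the leading coordinate
        for j in range(1, E.shape[1]):
            reg = KernelRidge(kernel="rbf", alpha=1e-2, gamma=10.0)
            reg.fit(E[:, kept], E[:, j])
            resid = E[:, j] - reg.predict(E[:, kept])
            r2 = 1.0 - resid.var() / E[:, j].var()
            if r2 < 0.9:                             # unpredictable: not a repeated Eigen-direction
                kept.append(j)
        print("retained embedding coordinates:", kept)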

    Association of Aciculin with Dystrophin and Utrophin

    Aciculin is a recently identified 60-kDa cytoskeletal protein, highly homologous to the glycolytic enzyme phosphoglucomutase type 1 (Belkin, A. M., Klimanskaya, I. V., Lukashev, M. E., Lilley, K., Critchley, D., and Koteliansky, V. E. (1994) J. Cell Sci. 107, 159-173). Aciculin expression in skeletal muscle is developmentally regulated, and this protein is particularly enriched at cell-matrix adherens junctions of muscle cells (Belkin, A. M., and Burridge, K. (1994) J. Cell Sci. 107, 1993-2003). The purpose of our study was to identify cytoskeletal protein(s) interacting with aciculin in various cell types. Using immunoprecipitation from cell lysates of metabolically labeled differentiating C2C12 muscle cells with anti-aciculin-specific antibodies, we detected a high molecular weight band (Mr approximately 400,000) consistently coprecipitating with aciculin. We showed that this 400-kDa band comigrated with dystrophin and immunoblotted with anti-dystrophin antibodies. The association between aciculin and dystrophin in C2C12 cells was shown to resist Triton X-100 extraction, and the majority of the complex could be extracted only in the presence of ionic detergents. In the reverse immunoprecipitation experiments, aciculin was detected in the precipitates with different anti-dystrophin antibodies. Immunodepletion experiments with lysates of metabolically labeled C2C12 myotubes showed that aciculin is a major dystrophin-associated protein in cultured skeletal muscle cells. Double immunostaining of differentiating and mature C2C12 myotubes with antibodies against aciculin and dystrophin revealed precise colocalization of these two cytoskeletal proteins throughout the process of myodifferentiation in culture. In skeletal muscle tissue, both proteins are concentrated at the sarcolemma and at myotendinous junctions. In contrast, utrophin, an autosomal homologue of dystrophin, was not codistributed with aciculin in muscle cell cultures and in skeletal muscle tissues. Analytical gel filtration experiments with purified aciculin and dystrophin showed interaction of these proteins in vitro, indicating that their association in skeletal muscle is due to direct binding. Whereas dystrophin was shown to be a major aciculin-associated protein in skeletal muscle, immunoblotting of anti-aciculin immunoprecipitates with antibodies against utrophin showed that aciculin is associated with utrophin in cultured A7r5 smooth muscle cells and REF52 fibroblasts. Immunodepletion experiments performed with lysates of metabolically labeled A7r5 cells demonstrated that aciculin is a major utrophin-binding protein in this cell type. Taken together, our data show that aciculin is a novel dystrophin- and utrophin-binding protein. Association of aciculin with dystrophin (utrophin) in various cell types might provide an additional cytoskeletal-matrix transmembrane link at sites where actin filaments terminate at the plasma membrane.