
    Delay-Coordinates Embeddings as a Data Mining Tool for Denoising Speech Signals

    In this paper we utilize techniques from the theory of non-linear dynamical systems to define a notion of embedding threshold estimators. More specifically, we use delay-coordinates embeddings of sets of coefficients of the measured signal (in some chosen frame) as a data mining tool to separate structures that are likely to be generated by signals belonging to some predetermined data set. We describe a particular variation of the embedding threshold estimator implemented in a windowed Fourier frame, and we apply it to speech signals heavily corrupted with the addition of several types of white noise. Our experimental work suggests that, after training on the data sets of interest, these estimators perform well for a variety of white noise processes and noise intensity levels. The method is compared, for the case of Gaussian white noise, to a block thresholding estimator.
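    To make the construction concrete, below is a minimal sketch of a delay-coordinate embedding threshold rule applied to a sequence of frame coefficients: coefficients whose embedded points lie far from a point cloud built from clean training coefficients are zeroed out. The parameter names, the nearest-neighbor distance test, and the quantile-based cutoff are illustrative assumptions, not the paper's actual estimator.

```python
# A minimal sketch (not the paper's exact method) of a delay-coordinate
# embedding threshold estimator. `embedding_dim`, `delay`, and
# `keep_fraction` are illustrative assumptions.
import numpy as np

def delay_embed(series, embedding_dim=3, delay=1):
    """Map a 1-D sequence of coefficients to delay-coordinate vectors
    (x[t], x[t+delay], ..., x[t+(m-1)*delay])."""
    n = len(series) - (embedding_dim - 1) * delay
    return np.stack([series[i * delay:i * delay + n]
                     for i in range(embedding_dim)], axis=1)

def embedding_threshold_denoise(noisy_coeffs, clean_reference_coeffs,
                                embedding_dim=3, delay=1, keep_fraction=0.5):
    """Keep coefficients whose delay-coordinate points lie close to the point
    cloud built from clean training coefficients; zero out the rest."""
    ref_cloud = delay_embed(clean_reference_coeffs, embedding_dim, delay)
    noisy_cloud = delay_embed(noisy_coeffs, embedding_dim, delay)
    # Distance from each embedded noisy point to its nearest reference point.
    dists = np.min(np.linalg.norm(noisy_cloud[:, None, :] - ref_cloud[None, :, :],
                                  axis=2), axis=1)
    threshold = np.quantile(dists, keep_fraction)  # data-driven cutoff (assumption)
    keep = np.ones(len(noisy_coeffs), dtype=bool)
    # Trailing coefficients with no embedded point are kept unchanged.
    keep[:len(dists)] = dists <= threshold
    return np.where(keep, noisy_coeffs, 0.0)
```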

    Representation Learning: A Review and New Perspectives

    The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, auto-encoders, manifold learning, and deep networks. This motivates longer-term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation and manifold learning.
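    As a concrete instance of one of the model families the review covers, here is a minimal sketch of unsupervised feature learning with a single-hidden-layer auto-encoder trained to reconstruct its input; the layer sizes, learning rate, and toy data are assumptions for illustration only.

```python
# A minimal sketch of unsupervised feature learning with a one-hidden-layer
# auto-encoder trained by gradient descent on reconstruction error.
# Layer sizes, learning rate, and the toy data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))           # toy data: 256 samples, 20 features
n_hidden = 5                             # size of the learned representation

W1 = rng.normal(scale=0.1, size=(20, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 20)); b2 = np.zeros(20)
lr = 0.01

for step in range(500):
    h = np.tanh(X @ W1 + b1)             # encoder: data -> representation
    X_hat = h @ W2 + b2                  # decoder: representation -> reconstruction
    grad_out = 2 * (X_hat - X) / X.shape[0]
    # Back-propagate the mean squared reconstruction error.
    dW2 = h.T @ grad_out
    db2 = grad_out.sum(axis=0)
    dz = (grad_out @ W2.T) * (1 - h ** 2)  # derivative of tanh
    dW1 = X.T @ dz
    db1 = dz.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1; W2 -= lr * dW2; b2 -= lr * db2

codes = np.tanh(X @ W1 + b1)             # learned features for downstream use
```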

    Neural Nearest Neighbors Networks

    Non-local methods exploiting the self-similarity of natural signals have been well studied, for example in image analysis and restoration. Existing approaches, however, rely on k-nearest neighbors (KNN) matching in a fixed feature space. The main hurdle in optimizing this feature space w.r.t. application performance is the non-differentiability of the KNN selection rule. To overcome this, we propose a continuous deterministic relaxation of KNN selection that maintains differentiability w.r.t. pairwise distances, but retains the original KNN as the limit of a temperature parameter approaching zero. To exploit our relaxation, we propose the neural nearest neighbors block (N3 block), a novel non-local processing layer that leverages the principle of self-similarity and can be used as a building block in modern neural network architectures. We show its effectiveness for the set reasoning task of correspondence classification as well as for image restoration, including image denoising and single image super-resolution, where we outperform strong convolutional neural network (CNN) baselines and recent non-local models that rely on KNN selection in hand-chosen feature spaces. Comment: to appear at NIPS*2018; code available at https://github.com/visinf/n3net
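    As an illustration of the relaxation idea (not the authors' implementation), the sketch below selects a "soft" nearest neighbor via a softmax over negative squared distances scaled by a temperature: as the temperature approaches zero the output converges to the hard 1-NN selection, while for any positive temperature it stays differentiable in the pairwise distances. The k-neighbor extension used in the N3 block is omitted, and the function and variable names are assumptions for illustration.

```python
# A minimal sketch of a temperature-controlled soft nearest-neighbor selection,
# illustrating a continuous, differentiable relaxation of (1-)NN matching.
# Names here are illustrative, not the authors' API.
import numpy as np

def soft_nearest_neighbor(query, candidates, temperature=1.0):
    """Return a convex combination of candidates weighted by a softmax over
    negative squared distances. As temperature -> 0 this approaches hard
    1-nearest-neighbor selection; for temperature > 0 it is differentiable
    w.r.t. the pairwise distances."""
    d2 = np.sum((candidates - query) ** 2, axis=1)     # squared distances
    logits = -d2 / temperature
    logits -= logits.max()                             # numerical stability
    weights = np.exp(logits) / np.exp(logits).sum()    # softmax weights
    return weights @ candidates, weights

# Example: with a small temperature the soft selection is close to the true 1-NN.
rng = np.random.default_rng(0)
cands = rng.normal(size=(8, 4))
q = rng.normal(size=4)
approx, w = soft_nearest_neighbor(q, cands, temperature=0.01)
exact = cands[np.argmin(np.sum((cands - q) ** 2, axis=1))]
print(np.allclose(approx, exact, atol=1e-3))
```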