    Sequence Prediction using Spectral RNNs

    Fourier methods have a long and proven track record as an excellent tool in data processing. As memory and computational constraints gain importance in embedded and mobile applications, we propose to combine Fourier methods and recurrent neural network architectures. The short-time Fourier transform allows us to efficiently process multiple samples at a time. Additionally, weight reduction through low-pass filtering is possible. We predict time series data drawn from the chaotic Mackey-Glass differential equation and from real-world power-load and motion-capture data.
    Comment: Source code available at https://github.com/v0lta/Spectral-RN
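
    A minimal sketch of the idea in PyTorch (the names, layer sizes, and default parameters here are illustrative assumptions, not the paper's code, which lives in the linked repository): an RNN steps over short-time Fourier frames instead of raw samples, and truncating each frame to its lowest-frequency bins stands in for the low-pass weight reduction.

        import torch
        import torch.nn as nn

        class SpectralRNN(nn.Module):
            """Run an RNN over short-time Fourier frames instead of raw samples."""

            def __init__(self, n_fft=64, hidden_size=128, keep_bins=None):
                super().__init__()
                self.n_fft = n_fft
                # "Low-pass filtering": keep only the lowest-frequency bins,
                # which shrinks the RNN input and hence its weight matrices.
                self.keep_bins = keep_bins or (n_fft // 2 + 1)
                self.rnn = nn.GRU(2 * self.keep_bins, hidden_size, batch_first=True)
                self.out = nn.Linear(hidden_size, 2 * self.keep_bins)

            def forward(self, x):
                # x: (batch, samples) real-valued time series.
                spec = torch.stft(x, self.n_fft, return_complex=True)  # (batch, bins, frames)
                spec = spec[:, : self.keep_bins]                       # low-pass truncation
                frames = torch.view_as_real(spec).permute(0, 2, 1, 3)  # (batch, frames, bins, 2)
                h, _ = self.rnn(frames.flatten(2))                     # real/imag parts as features
                return self.out(h)                                     # next-frame prediction

        model = SpectralRNN(keep_bins=17)     # keep roughly half the bins
        pred = model(torch.randn(8, 1024))    # 8 series of 1024 samples each

    Because each STFT frame covers many raw samples, the RNN takes far fewer steps per series, which is where the efficiency gain in the abstract comes from.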

    Deep Neural Object Analysis by Interactive Auditory Exploration with a Humanoid Robot

    We present a novel approach for interactive auditory object analysis with a humanoid robot. The robot elicits sensory information by physically shaking visually indistinguishable plastic capsules and gathers the resulting audio signals from microphones embedded in its robotic ears. A neural network architecture learns from these signals to analyze properties of the containers' contents. Specifically, we evaluate material-classification and weight-prediction accuracy and demonstrate that the framework is fairly robust to real-world acoustic noise.
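
    A hedged sketch of how such a pipeline might look in PyTorch (the architecture, the two-head layout, and the placeholder class count are assumptions for illustration, not the paper's network): a small CNN over log-magnitude spectrograms of the shaking audio, with one head for material classification and one for weight regression.

        import torch
        import torch.nn as nn

        class ShakeSoundNet(nn.Module):
            """CNN over log-magnitude spectrograms of the shaking audio."""

            def __init__(self, n_materials=4, n_fft=256):  # n_materials is a placeholder
                super().__init__()
                self.n_fft = n_fft
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                )
                self.material = nn.Linear(32, n_materials)  # classification head
                self.weight = nn.Linear(32, 1)              # weight-regression head

            def forward(self, audio):
                # audio: (batch, samples) recorded by the ear microphones.
                mag = torch.stft(audio, self.n_fft, return_complex=True).abs()
                z = self.features(mag.log1p().unsqueeze(1))  # (batch, 1, bins, frames)
                return self.material(z), self.weight(z)

        net = ShakeSoundNet()
        material_logits, weight_pred = net(torch.randn(2, 16000))  # two 1 s clips @ 16 kHz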

    Scale-dependent Relationships in Natural Language

    Natural language exhibits statistical dependencies at a wide range of scales. For instance, the mutual information between words in natural language decays like a power law with the temporal lag between them. However, many statistical learning models applied to language impose a sampling scale while extracting statistical structure. For instance, Word2Vec constructs a vector embedding that maximizes the prediction between a target word and the context words that appear nearby in the corpus. The size of the context is chosen by the user and imposes a fixed scale; relationships over much larger temporal scales are invisible to the algorithm. This paper examines the family of Word2Vec embeddings generated while systematically manipulating the sampling scale used to define the context around each word. The primary result is that different linguistic relationships are preferentially encoded at different scales: different scales emphasize different syntactic and semantic relations between words. Moreover, the neighborhoods of a given word in the embeddings change significantly depending on the scale. These results suggest that any individual scale can identify only a subset of the meaningful relationships a word might have, and they point toward the importance of developing scale-free models of semantic meaning.
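
    The described experiment is straightforward to approximate with gensim's Word2Vec implementation (gensim and the toy corpus are assumptions for illustration; the paper does not specify a library): train the same corpus at several window sizes and compare the nearest neighbours of a word across the resulting embeddings.

        from gensim.models import Word2Vec

        # Toy tokenized corpus, repeated so the model has something to fit.
        corpus = [
            ["language", "exhibits", "statistical", "dependencies", "at", "many", "scales"],
            ["mutual", "information", "between", "words", "decays", "like", "a", "power", "law"],
            ["the", "context", "window", "defines", "the", "sampling", "scale"],
        ] * 50

        # One embedding per sampling scale: only `window` differs between runs.
        models = {
            w: Word2Vec(corpus, vector_size=50, window=w, min_count=1, epochs=10, seed=1)
            for w in (2, 5, 15)
        }

        # Nearest neighbours of the same word shift with the chosen scale.
        for w, model in models.items():
            print(w, model.wv.most_similar("scale", topn=3))

    On a real corpus, small windows tend to surface syntactic substitutes while large windows surface topical associates, which is the scale dependence the abstract describes.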