
    The body as a reservoir: locomotion and sensing with linear feedback

    It is known that mass-spring nets have computational power and can be trained to reproduce oscillating patterns. In this work, we extend this idea to locomotion and sensing. We simulate systems made of bars and springs and show that these structures can maintain stable gaits with only linear feedback. We then conduct a classification experiment in which the system has to distinguish terrains while maintaining an oscillatory pattern. These experiments indicate that the control of compliant robots can be simplified by exploiting the computational power of the body's dynamics.
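    The core claim can be illustrated with a much simpler toy than the authors' bar-and-spring simulator: a damped mass-spring whose actuation is a purely linear function of its own state. In this sketch (all parameters illustrative) the linear feedback term cancels the damping, so a stable oscillation persists; without feedback the motion dies out.

```python
import numpy as np

def late_amplitude(feedback, steps=20000, dt=1e-3, k=10.0, c=0.5, m=1.0):
    """Simulate a damped mass-spring; return the peak |x| over the last 2 s."""
    x, v = 1.0, 0.0
    late = []
    for t in range(steps):
        u = c * v if feedback else 0.0        # linear feedback on the state
        a = (-k * x - c * v + u) / m          # spring + damper + actuation
        v += a * dt                           # semi-implicit Euler step
        x += v * dt
        if t >= steps - 2000:
            late.append(abs(x))
    return max(late)
```

    With feedback the oscillation is sustained at roughly its initial amplitude; without it the amplitude decays by orders of magnitude over the same horizon.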

    Learning content-based metrics for music similarity

    In this abstract, we propose a method to learn application-specific content-based metrics for music similarity using unsupervised feature learning and neighborhood components analysis. Multiple-timescale features extracted from music audio are embedded into a Euclidean metric space, so that the distance between songs reflects their similarity. We evaluate the method on the GTZAN and Magnatagatune datasets.
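    The embedding step can be sketched with scikit-learn's neighborhood components analysis on toy data (fake features and labels, not the paper's multiple-timescale audio features): NCA learns a linear map so that Euclidean distance in the embedded space reflects the labels.

```python
import numpy as np
from sklearn.neighbors import NeighborhoodComponentsAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))            # 100 "songs", 8-d features
y = (X[:, 0] > 0).astype(int)            # two "genres"

nca = NeighborhoodComponentsAnalysis(n_components=2, random_state=0)
Z = nca.fit_transform(X, y)              # embed into a 2-d metric space

# same-genre pairs should be closer on average than cross-genre pairs
d_same = np.mean([np.linalg.norm(Z[i] - Z[j])
                  for i in range(100) for j in range(i + 1, 100) if y[i] == y[j]])
d_diff = np.mean([np.linalg.norm(Z[i] - Z[j])
                  for i in range(100) for j in range(i + 1, 100) if y[i] != y[j]])
```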

    Memory in reservoirs for high dimensional input

    Reservoir Computing (RC) is a recently introduced scheme for employing recurrent neural networks while circumventing the difficulties that typically arise when training the recurrent weights. The 'reservoir' is a fixed, randomly initialized recurrent network which receives input via a random mapping. Only an instantaneous linear mapping from the network to the output is trained, which can be done with linear regression. In this paper we study the dynamical properties of reservoirs receiving a high number of inputs. More specifically, we investigate how the internal state of the network retains fading memory of its input signal. Memory properties of random recurrent networks have been thoroughly examined in past research, but only for one-dimensional input. Here we take into account statistics that typically occur in high-dimensional signals. We present empirical results that express how memory in recurrent networks is distributed over the individual principal components of the input.
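    The basic RC setup described above can be sketched in a few lines (a one-dimensional echo-state example with illustrative sizes, not the paper's high-dimensional experiments): a fixed random recurrent network, a random input mapping, and a linear readout trained with ordinary least squares to recall the input a few steps in the past, i.e. its fading memory.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, d = 100, 1000, 3                              # reservoir size, steps, delay
W = rng.normal(size=(N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # scale spectral radius to 0.9
W_in = rng.normal(size=N)                           # random input mapping

u = np.sin(0.1 * np.arange(T)) + 0.1 * rng.normal(size=T)  # input signal
x = np.zeros(N)
X = np.empty((T, N))
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])                # fixed recurrent dynamics
    X[t] = x

warm = 50                                           # discard the initial transient
# only the linear readout is trained: recall u(t - d) from the state x(t)
w_out = np.linalg.lstsq(X[warm:], u[warm - d:T - d], rcond=None)[0]
rmse = float(np.sqrt(np.mean((X[warm:] @ w_out - u[warm - d:T - d]) ** 2)))
```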

    Building a patient-specific seizure detector without expert input using user triggered active learning strategies

    Purpose: Patient-specific seizure detectors outperform general seizure detectors, but building them requires a large amount of consistently annotated electroencephalogram (EEG) data from a single patient, which is expensive to gather. This work presents a method to bring general seizure detectors up to par with patient-specific ones without expert input: the user/patient is only required to push a button in case of a false alarm and/or a missed seizure. Method: The experiments used the 'CHB-MIT Scalp EEG Database', which contains pre-surgically recorded EEG of 24 patients. The seizure detector is based on (Buteneers et al., Epilepsy Research 2012:(in press)) combined with the preprocessing technique presented in (Shoeb et al., Epilepsy & Behavior 2004;5:483-598). Button presses label the corresponding data and add it to the training set of the system. Performance is evaluated using leave-one-hour-out cross-validation to attain statistically relevant results. Results: For the patient-specific seizure detector, 34(32)% (average(standard deviation)) of the detections are false, 8(14)% of the seizures are missed, and a detection delay of 11(10) s is reached. The general seizure detector achieves 86(89)%, 28(41)% and -35(82) s, respectively. When only false alarms are added, patient-specific performance is reached in 9 of the 24 patients; when missed seizures are added as well, it is reached in 21 patients (about 90%). Conclusion: This work shows that, with the presented technique, no expert-annotated patient-specific EEG data is required to build a patient-specific seizure detector for up to 90% of the patients.
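    The user-triggered loop can be sketched abstractly (toy features and a perceptron-style update, not the paper's EEG pipeline or detector): a "general" linear detector is corrected whenever the patient presses the button on a false alarm or a missed seizure, and that segment, with its corrected label, is folded into training.

```python
import numpy as np

rng = np.random.default_rng(1)
w = np.array([1.0, 0.0, 0.0])            # "general" detector, slightly off
# this patient's seizures are actually decided by x0 + x1 > 0
Xp = rng.normal(size=(500, 3))
yp = (Xp[:, 0] + Xp[:, 1] > 0).astype(int)

presses = 0
for x, y in zip(Xp[:300], yp[:300]):
    pred = int(w @ x > 0)
    if pred != y:                        # button press: false alarm or miss
        presses += 1
        w += (y - pred) * x              # online correction on that segment

# evaluate the adapted detector on held-out segments of the same patient
acc = float(np.mean([int(w @ x > 0) == y for x, y in zip(Xp[300:], yp[300:])]))
```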

    Towards a neural hierarchy of time scales for motor control

    Animals show remarkably rich motor skills which are still far from realizable with robots. Inspired by the neural circuits which generate rhythmic motion patterns in the spinal cord of all vertebrates, one main research direction points towards the use of central pattern generators in robots. One of the key advantages of this approach is that the dimensionality of the control problem is reduced. In this work we investigate this further by introducing a multi-timescale control hierarchy with, at its core, a hierarchy of recurrent neural networks. By means of robot experiments, we demonstrate that this hierarchy can embed any rhythmic motor signal by imitation learning. Furthermore, the proposed hierarchy allows the tracking of several high-level motion properties (e.g. amplitude and offset), which are usually observed at a slower rate than the generated motion. Although these experiments are preliminary, the results are promising and have the potential to open the door to rich motor skills and advanced control.

    Feedback control by online learning an inverse model

    A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that does not require explicit knowledge about the plant. This framework uses two learning modules, one for creating an inverse model, and the other for actually controlling the plant. Except for their inputs, they are identical. The inverse model learns from the exploration performed by the not-yet-fully-trained controller, while the actual controller is based on the currently learned model. The proposed framework allows fast online learning of an accurate controller. The controller can be applied on a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework on several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made.
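    The inverse-model idea can be sketched on a linear toy plant (the paper's learner runs online and handles nonlinear plants; here a batch least-squares fit stands in, and all parameters are illustrative). The plant is y' = a*y + b*u; the inverse model learns u as a function of (current state, desired next state) from exploratory actions, and the controller then queries that model to track a reference.

```python
import numpy as np

a, b = 0.8, 0.5                          # toy linear plant y' = a*y + b*u
rng = np.random.default_rng(0)

# exploration phase: random actions, record (y, y_next) -> u
inputs, actions, y = [], [], 0.0
for _ in range(500):
    u = rng.normal()
    y_next = a * y + b * u
    inputs.append([y, y_next])
    actions.append(u)
    y = y_next
w = np.linalg.lstsq(np.array(inputs), np.array(actions), rcond=None)[0]

# control phase: ask the inverse model for the action that reaches y_ref
y, errs = 0.0, []
for t in range(200):
    y_ref = np.sin(0.05 * t)             # reference trajectory
    u = w @ np.array([y, y_ref])         # inverse model as controller
    y = a * y + b * u
    errs.append(abs(y - y_ref))
max_err = max(errs)
```

    On this exactly-linear plant the fitted inverse recovers u = (y_ref - a*y)/b, so tracking is essentially exact; the interest of the paper's framework is that the same two-module structure works when no such closed form exists.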

    Accelerating sparse restricted Boltzmann machine training using non-Gaussianity measures

    In recent years, sparse restricted Boltzmann machines have gained popularity as unsupervised feature extractors. Starting from the observation that their training process is biphasic, we investigate how it can be accelerated: by determining when it can be stopped based on the non-Gaussianity of the distribution of the model parameters, and by increasing the learning rate when the learnt filters have locked on to their preferred configurations. We evaluated our approach on the CIFAR-10, NORB and GTZAN datasets.
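    A stopping signal of this kind can be sketched with excess kurtosis as the non-Gaussianity measure (an assumption for illustration; the paper's exact statistic may differ): early in training the parameter distribution looks Gaussian, and once the filters lock on it becomes sparse and heavy-tailed.

```python
import numpy as np

def excess_kurtosis(w):
    """Fourth standardized moment minus 3; ~0 for Gaussian samples."""
    z = (w - w.mean()) / w.std()
    return float(np.mean(z ** 4) - 3.0)

rng = np.random.default_rng(0)
early = rng.normal(size=10000)           # Gaussian-looking early weights
late = rng.laplace(size=10000)           # heavy-tailed "locked-in" filters

# training could stop (or the learning rate increase) once the
# measure crosses a threshold between these two regimes
k_early, k_late = excess_kurtosis(early), excess_kurtosis(late)
```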

    Audio-based music classification with a pretrained convolutional network

    Recently the ‘Million Song Dataset’, containing audio features and metadata for one million songs, was made available. In this paper, we build a convolutional network and train it to perform artist recognition, genre recognition and key detection. The network is tailored to summarize the audio features over musically significant timescales. It is infeasible to train the network on all available data in a supervised fashion, so we use unsupervised pretraining to harness the entire dataset: we train a convolutional deep belief network on all data, and then use the learnt parameters to initialize a convolutional multilayer perceptron with the same architecture. The MLP is then trained on a labeled subset of the data for each task. We also train the same MLP with randomly initialized weights. We find that our convolutional approach improves accuracy for the genre recognition and artist recognition tasks. Unsupervised pretraining improves convergence speed in all cases, and for artist recognition it improves accuracy as well.
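    The pretrain-then-transfer step can be sketched with a plain binary RBM trained by one-step contrastive divergence on toy data (the paper uses a convolutional DBN on audio features; sizes and data here are illustrative). The unsupervised weights then initialize the hidden layer of an MLP with the same architecture, which would subsequently be fine-tuned on the labeled subset.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# toy binary data: noisy copies of two prototype patterns
protos = rng.integers(0, 2, size=(2, 16)).astype(float)
X = protos[rng.integers(0, 2, size=500)]
X = np.abs(X - (rng.random(X.shape) < 0.05))         # flip 5% of bits

def recon_error(W, bv, bh):
    """Mean squared reconstruction error of one up-down pass."""
    h = sigmoid(X @ W + bh)
    v = sigmoid(h @ W.T + bv)
    return float(np.mean((X - v) ** 2))

nv, nh, lr = 16, 8, 0.1
W = 0.01 * rng.normal(size=(nv, nh))
bv, bh = np.zeros(nv), np.zeros(nh)
err_random = recon_error(W, bv, bh)                  # random-init baseline

for _ in range(20):                                  # CD-1 training epochs
    for v0 in X:
        h0 = sigmoid(v0 @ W + bh)
        hs = (rng.random(nh) < h0).astype(float)     # sample hidden units
        v1 = sigmoid(hs @ W.T + bv)
        h1 = sigmoid(v1 @ W + bh)
        W += lr * (np.outer(v0, h0) - np.outer(v1, h1))
        bv += lr * (v0 - v1)
        bh += lr * (h0 - h1)
err_pretrained = recon_error(W, bv, bh)

# transfer: the unsupervised weights initialize the supervised MLP
mlp_W1, mlp_b1 = W.copy(), bh.copy()
```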