
    Neuromorphic Hardware In The Loop: Training a Deep Spiking Network on the BrainScaleS Wafer-Scale System

    Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in software to a spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10 000 compared to the biological time domain. This mapping is followed by in-the-loop training: in each training step, the network activity is first recorded in hardware and then used to compute the parameter updates in software via backpropagation. An essential finding is that the parameter updates do not have to be precise, but only need to approximately follow the correct gradient, which simplifies their computation. Using this approach, after only several tens of iterations, the spiking network reaches an accuracy close to that of the ideal software-simulated prototype. The presented techniques show that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.

    Comment: 8 pages, 10 figures, submitted to IJCNN 201
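
    As a concrete illustration of the training loop the abstract describes, the following is a minimal, runnable sketch. The chip is replaced by a toy stand-in that distorts the programmed weights with fixed, unknown offsets; the network size, task, and learning rate are illustrative, and none of this uses the actual BrainScaleS API.

        import numpy as np

        rng = np.random.default_rng(0)

        class NoisyChip:
            """Toy stand-in for the analog substrate (not the BrainScaleS
            interface): emulation distorts the programmed weights with
            fixed, unknown offsets, mimicking substrate-induced anomalies."""
            def __init__(self, shape1, shape2, noise=0.1):
                self.d1 = rng.normal(scale=noise, size=shape1)
                self.d2 = rng.normal(scale=noise, size=shape2)

            def configure(self, W1, W2):
                # Writing analog parameters is imprecise.
                self.W1, self.W2 = W1 + self.d1, W2 + self.d2

            def run(self, x):
                h = np.maximum(x @ self.W1, 0.0)   # recorded hidden activity
                return h, h @ self.W2              # recorded output activity

        def softmax(z):
            e = np.exp(z - z.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)

        # Synthetic task: 200 samples, 20 inputs, 3 classes.
        x = rng.normal(size=(200, 20))
        y = rng.integers(0, 3, size=200)
        y1h = np.eye(3)[y]

        W1 = rng.normal(scale=0.3, size=(20, 16))  # software copies of the weights
        W2 = rng.normal(scale=0.3, size=(16, 3))
        chip = NoisyChip(W1.shape, W2.shape)

        for step in range(40):              # "several tens of iterations"
            chip.configure(W1, W2)          # write current parameters to hardware
            h, out = chip.run(x)            # record the emulated network activity
            p = softmax(out)
            # Backpropagation in software, using the recorded hardware activity.
            # The gradient is only approximate, because the effective on-chip
            # weights differ from the software copies W1, W2 by unknown offsets.
            g2 = h.T @ (p - y1h) / len(x)
            gh = (p - y1h) @ W2.T * (h > 0)
            g1 = x.T @ gh / len(x)
            W2 -= 0.5 * g2
            W1 -= 0.5 * g1

    Because the offsets in this toy model are fixed, the loop settles on software weights whose distorted on-chip versions classify well, which is the sense in which imprecise, approximately-correct updates suffice.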

    Learning Probabilistic CP-nets from Observations of Optimal Items

    Modelling preferences has been an active research topic in Artificial Intelligence for more than fifteen years. Existing formalisms are rich and flexible enough to describe the behaviour of complex decision rules. However, to be useful in practice, these formalisms must also permit fast elicitation of a user's preferences, involving only a reasonable amount of interaction. It is therefore interesting to learn not a single model, but a probabilistic model that can compactly represent the preferences of a group of users; this model can then be finely tuned to fit one particular user. Even in contexts where a user is not anonymous, her preferences are usually ill-known, because they can depend on the values of non-controllable state variables. In such contexts, we would like to be able to answer questions like "What is the probability that o is preferred to o' by some (unknown) agent?" or "Which item is most likely to be the preferred one, given some constraints?". We study in this paper how Probabilistic Conditional Preference networks (PCP-nets) can be learnt, in both offline and online settings. We assume we have a list of items which are, or have been, optimal for some user or in some context; such a list can be, for instance, a list of items that have been sold. We prove that this information is sufficient to learn a partial order over the set of possible items when these have a combinatorial structure.
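
    To make the learning step concrete, here is a minimal sketch under strong simplifying assumptions: binary variables, a known tree-shaped CP-net structure (each variable has at most one parent), and the property that in an optimal item every variable takes its preferred value given its parent's value. The variable names and the sales list are purely illustrative; the paper itself addresses general combinatorial domains and online updates.

        from collections import Counter, defaultdict

        # Assumed (known) structure: variable -> parent, None for the root.
        parents = {"colour": None, "trim": "colour", "wheels": "colour"}

        # Observed optimal items, e.g. configurations that have been sold.
        sales = [
            {"colour": 1, "trim": 0, "wheels": 1},
            {"colour": 1, "trim": 0, "wheels": 1},
            {"colour": 0, "trim": 1, "wheels": 1},
            {"colour": 1, "trim": 1, "wheels": 1},
        ]

        # counts[(var, parent_value)][value]: how often `value` was the
        # optimal choice for `var` in that parent context.
        counts = defaultdict(Counter)
        for item in sales:
            for var, par in parents.items():
                ctx = None if par is None else item[par]
                counts[(var, ctx)][item[var]] += 1

        # Frequency estimates of the preference probabilities, one per
        # conditional preference table entry of the probabilistic CP-net.
        for (var, ctx), c in counts.items():
            p1 = c[1] / (c[0] + c[1])
            print(f"P({var}=1 preferred | parent={ctx}) = {p1:.2f}")

    An online variant would simply update the counters as each new optimal item arrives, which is one way to read the offline/online distinction in the abstract.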