Transformation Equivariant Boltzmann Machines
Abstract. We develop a novel modeling framework for Boltzmann machines, augmenting each hidden unit with a latent transformation-assignment variable that selects a transformed view of the canonical connection weights associated with the unit. This enables the model's inferences to transform in a stable and predictable way in response to transformed input data, and avoids learning multiple features that differ only by a transformation. Extending prior work on translation-equivariant (convolutional) models, we develop translation- and rotation-equivariant restricted Boltzmann machines (RBMs) and deep belief nets (DBNs), and demonstrate their effectiveness in learning frequently occurring statistical structure from artificial and natural images.
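The core idea of weight sharing across transformed views can be illustrated with a toy sketch. This is a hedged, minimal illustration, not the paper's learned RBM features: the canonical filter, the toy image, and the restriction to 90-degree rotations (via `np.rot90`) are all assumptions made here for simplicity. The point it demonstrates is the equivariance property: rotating the input only permutes which transformed view of the shared filter responds most strongly.

```python
import numpy as np

# One canonical filter and its transformed views (90-degree rotations),
# all sharing the same underlying weights.
rng = np.random.default_rng(1)
canonical = rng.standard_normal((5, 5))
views = [np.rot90(canonical, k) for k in range(4)]  # 4 transformed views

image = rng.standard_normal((5, 5))

def responses(img):
    """Inner product of the image with each transformed view."""
    return np.array([np.sum(v * img) for v in views])

r0 = responses(image)
r1 = responses(np.rot90(image, 1))  # rotate the input by 90 degrees

# Rotating the input cyclically permutes the view responses.
print(np.allclose(r1, np.roll(r0, 1)))  # prints True
```

Because all four views are rotations of one weight matrix, the model needs only a single canonical feature rather than four independently learned rotated copies, which is the redundancy the abstract says this framework avoids.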
The economic and accounting content of fixed assets
This book presents a mathematical methodology for image analysis tasks at the edge of current research, including anisotropic diffusion filtering of tensor fields. Instead of specific applications, it explores methodological structures on which they are built.DIPLECS, GARNICS, NACI
Dynamic pursuit with a bio-inspired neural model
In this paper we present a bio-inspired connectionist model for visual perception of motion and its pursuit. It is organized in three stages: causal spatio-temporal filtering of Gabor-like type, an antagonist inhibition mechanism, and a densely interconnected neural population. These stages are inspired by the neural processing and interactions of the primary visual cortex, the middle temporal area, and higher visual areas. The model has been evaluated on natural image sequences.
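The first stage's Gabor-like front end can be sketched as follows. This is a minimal spatial-only illustration under assumed parameters (kernel size, envelope width, frequency, orientation); the model's actual filters are causal and spatio-temporal, which is not shown here.

```python
import numpy as np

def gabor(size=21, sigma=3.0, theta=0.0, freq=0.2):
    """Real part of a 2-D Gabor kernel: Gaussian envelope times cosine carrier."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)        # rotated coordinate axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * freq * xr)
    return envelope * carrier

g = gabor()
print(g.shape)  # (21, 21); centre value is exactly 1.0
```

Convolving an image sequence with a bank of such kernels at several orientations and frequencies gives the oriented responses that the later inhibition and recurrent stages operate on.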
Fractionally predictive spiking neurons
Recent experimental work has suggested that the neural firing rate can be interpreted as a fractional derivative, at least when signal variation induces neural adaptation. Here, we show that the actual neural spike-train itself can be considered as the fractional derivative, provided that the neural signal is approximated by a sum of power-law kernels. A simple standard thresholding spiking neuron suffices to carry out such an approximation, given a suitable refractory response. Empirically, we find that the online approximation of signals with a sum of power-law kernels is beneficial for encoding signals with slowly varying components, like long-memory self-similar signals. For such signals, the online power-law kernel approximation typically required less than half the number of spikes for a similar SNR compared to sums of similar but exponentially decaying kernels. As power-law kernels can be accurately approximated using sums or cascades of weighted exponentials, we demonstrate that the corresponding decoding of spike-trains by a receiving neuron allows for natural and transparent temporal signal filtering by tuning the weights of the decoding kernel.
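The last claim, that a power-law kernel is well approximated by a weighted sum of exponentials, is easy to check numerically. The sketch below is an assumption-laden illustration (the exponent, the time range, the number and spacing of decay constants, and the least-squares fit are all choices made here, not taken from the paper).

```python
import numpy as np

beta = 0.5
t = np.linspace(0.01, 10.0, 1000)
target = t ** (-beta)                        # power-law kernel t^(-beta)

taus = np.logspace(-2, 1, 8)                 # log-spaced decay constants
basis = np.exp(-t[:, None] / taus[None, :])  # (time, kernel) exponential basis

# Least-squares weights for the exponential mixture
w, *_ = np.linalg.lstsq(basis, target, rcond=None)
approx = basis @ w

rel_err = np.linalg.norm(approx - target) / np.linalg.norm(target)
print(f"relative L2 error: {rel_err:.4f}")
```

With only eight log-spaced exponentials the mixture tracks the power law across three decades to within a few percent, which is what makes the exponential-cascade decoding mentioned in the abstract practical.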
Latency-Based Probabilistic Information Processing in Recurrent Neural Hierarchies
In this article, we present an original neural space/latency code, integrated in a multi-layered neural hierarchy, that offers a new perspective on probabilistic inference operations. Our work is based on the dynamic neural field paradigm, in which recurrent lateral interactions lead to the emergence of activity bumps, thus providing a spatial coding of information. We propose that lateral connections represent a data model, i.e., the conditional probability of a "true" stimulus given a noisy input. We propose furthermore that the resulting attractor state encodes the most likely "true" stimulus given the data model, and that its latency expresses the confidence in this interpretation. The main feature of this network is thus its ability to represent, transmit, and integrate probabilistic information at multiple levels, so as to make near-optimal decisions when inputs are contradictory, noisy, or missing. We illustrate these properties on a three-layered neural hierarchy receiving inputs from a simplified robotic object-recognition task. We also compare the network dynamics to an explicit probabilistic model of the task, to verify that it indeed reproduces all relevant properties of probabilistic processing.
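The bump-formation mechanism at the heart of the dynamic neural field paradigm can be sketched in a few lines. This is a generic Amari-type field under illustrative assumptions (grid size, difference-of-Gaussians lateral kernel, clipped firing-rate nonlinearity, Euler integration), not the article's tuned model: local excitation and broader inhibition let a single bump of activity settle on a noisy input peak.

```python
import numpy as np

n = 100
x = np.linspace(-1, 1, n)
dx = x[1] - x[0]

def gauss(d, sigma):
    return np.exp(-d**2 / (2 * sigma**2))

# Lateral kernel: narrow excitation, broader inhibition
d = x[:, None] - x[None, :]
w = 2.0 * gauss(d, 0.1) - 1.0 * gauss(d, 0.4)

rng = np.random.default_rng(0)
inp = gauss(x - 0.3, 0.1) + 0.2 * rng.standard_normal(n)  # noisy stimulus at 0.3

u = np.zeros(n)
for _ in range(200):                  # Euler integration of tau*du/dt = -u + w*f(u) + inp
    f = np.clip(u, 0.0, 1.0)         # firing-rate nonlinearity
    u += 0.1 * (-u + w @ f * dx + inp)

peak = x[np.argmax(u)]
print(f"bump centred near {peak:.2f}")
```

The recurrent lateral interactions suppress the spatially incoherent noise while amplifying the coherent stimulus, so the resulting bump, the attractor state in the abstract's terms, sits near the true stimulus location.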
Channel Coding for Joint Colour and Depth Segmentation
Segmentation is an important preprocessing step in many applications. Compared to colour-only segmentation, fusing colour and depth greatly improves the segmentation result. Such a fusion is easy to do by stacking measurements in different value dimensions, but there are better ways. In this paper we perform fusion using the channel representation, and demonstrate how a state-of-the-art segmentation algorithm can be modified to use channel values as inputs. We evaluate segmentation results on data collected using the Microsoft Kinect peripheral for Xbox 360, using the superparamagnetic clustering algorithm. Our experiments show that depth gradients are more useful than depth values for segmentation, and that channel coding both colour and depth gradients makes tuned parameter settings generalise better to novel images.
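A minimal sketch of channel coding a scalar value may help here. One common channel representation uses overlapping cos² basis functions; the channel centres, spacing, and width below are illustrative assumptions, not the paper's exact settings. Each measurement becomes a short vector of soft channel activations rather than a single number, which is what allows colour and depth gradients to be fused on equal footing.

```python
import numpy as np

def channel_encode(x, centres, width=1.5):
    """Encode scalar x as soft activations of overlapping cos^2 channels."""
    d = np.abs(x - centres)
    a = np.cos(np.pi * d / (2 * width)) ** 2
    a[d >= width] = 0.0            # channels farther than `width` stay silent
    return a

centres = np.arange(0.0, 11.0, 1.0)   # channels at 0, 1, ..., 10
code = channel_encode(3.4, centres)
print(np.round(code, 3))
```

With unit spacing and width 1.5, exactly three channels are active for any interior value and their activations always sum to 3/2, a constant-sum property that makes channel vectors directly comparable across measurements.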
EFFICACY OF HEPATITIS B IMMUNE SERUM GLOBULIN AFTER ACCIDENTAL EXPOSURE: Preliminary Report of the Veterans Administration Cooperative Study
A randomised, double-blind, controlled trial has been undertaken to compare the efficacy of hepatitis B immune globulin (H.B.I.G.) with that of immune serum globulin (I.S.G.) for the prophylaxis of viral hepatitis. Participants in the trial were individuals exposed accidentally to material infectious for hepatitis (primarily viral B hepatitis). Preliminary evaluation of the first 302 of the 561 individuals entered into the study indicates that H.B.I.G. significantly reduced the frequencies of both clinical and sub-clinical hepatitis during the first 3-4 months after the injection. Less than 10% of H.B.I.G. recipients had detectable anti-HBs at the sixth month after the injection, suggesting that H.B.I.G. might need to be given every 3-4 months to continually exposed individuals. Further long-term evaluation is required in order to define more clearly those most likely to benefit from H.B.I.G.