Motion Invariance in Visual Environments
The puzzle of computer vision might admit new solutions once we
recognize that most successful methods work at the image level, which is
remarkably more difficult than processing visual streams directly, as
happens in nature. In this paper, we claim that processing video streams
naturally leads to the formulation of the motion invariance principle, which
enables the construction of a new theory of visual learning based on
convolutional features. The theory addresses a number of intriguing questions
that arise in natural vision, and it offers a well-posed computational scheme
for the discovery of convolutional filters over the retina. The filters are
driven by the Euler-Lagrange differential equations derived from the principle
of least cognitive action, which parallels the laws of mechanics. Unlike
traditional convolutional networks, which require massive supervision, the
proposed theory offers a truly new scenario in which feature learning takes
place by unsupervised processing of video signals. An experimental report on
the theory is presented, showing that features extracted under motion
invariance yield an improvement that can be assessed by measuring
information-based indexes.
Comment: arXiv admin note: substantial text overlap with arXiv:1801.0711
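To make the mechanical analogy concrete, the following is a minimal sketch of the variational scheme that a principle of least cognitive action suggests; the symbols q (the learnable filter parameters) and L (the Lagrangian) are placeholders for illustration, since the abstract does not spell out the paper's actual functional.

```latex
% Minimal least-action sketch (placeholder Lagrangian, not the paper's own):
% the parameters q(t) evolve so that the cognitive action is stationary.
\[
  \mathcal{A}[q] \;=\; \int_{0}^{T} L\bigl(q(t), \dot q(t), t\bigr)\, dt .
\]
% Stationarity of the action yields the Euler-Lagrange equations that,
% as in mechanics, drive the evolution of the convolutional filters:
\[
  \frac{d}{dt}\,\frac{\partial L}{\partial \dot q}
  \;-\; \frac{\partial L}{\partial q} \;=\; 0 .
\]
```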
Backprop Diffusion is Biologically Plausible
The Backpropagation algorithm relies on a neural model abstraction that
dispenses with the notion of time, since the input is mapped instantaneously
to the output. In this paper, we claim that this abstraction of ignoring time
and the abrupt input changes that occur when feeding the training set are in
fact the reasons why, in some papers, the biological plausibility of Backprop
is regarded as an arguable issue. We show that as soon as a deep feedforward
network operates with neurons with time-delayed responses, the backprop weight
update turns out to be the basic equation of a biologically plausible
diffusion process based on forward-backward waves. We also show that such a
process approximates the gradient very well for inputs that are not too fast
with respect to the depth of the network. These remarks disclose the diffusion
process behind the backprop equation and lead us to interpret the
corresponding algorithm as a degeneration of a more general diffusion process
that also takes place in neural networks with cyclic connections.
Comment: 9 pages, 3 figures. arXiv admin note: text overlap with arXiv:1907.0510
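As a rough illustration of the forward-backward wave picture, here is a hedged toy sketch in Python; the linear-chain model and all names are my own assumptions, not the paper's code. Each layer responds with a one-step delay, so activations travel forward one layer per step and error signals travel backward one layer per step, and the local weight update (presynaptic activity times the backward signal) is compared against the instantaneous backprop gradient.

```python
import numpy as np

# Toy model (assumed, not from the paper): a deep linear chain in which each
# layer responds with a one-step time delay, so activations propagate forward
# as a wave and error signals propagate backward as a wave.

rng = np.random.default_rng(0)
L = 5            # network depth
T = 200          # simulation steps
w = rng.normal(scale=0.5, size=L)   # one scalar weight per layer

def slow_input(t):
    # input varying slowly relative to the depth L
    return np.sin(2 * np.pi * t / 100.0)

x = np.zeros(L + 1)   # x[0] is the input, x[l] the output of layer l
d = np.zeros(L + 1)   # backward-travelling error signals

grads_wave = np.zeros(L)
for t in range(T):
    target = 0.5 * slow_input(t)            # arbitrary teaching signal
    # forward wave: each layer reads its predecessor's *previous* output
    x_new = x.copy()
    x_new[0] = slow_input(t)
    for l in range(1, L + 1):
        x_new[l] = w[l - 1] * x[l - 1]
    # backward wave: error enters at the top, moves down one layer per step
    d_new = d.copy()
    d_new[L] = x[L] - target
    for l in range(L - 1, 0, -1):
        d_new[l] = w[l] * d[l + 1]
    x, d = x_new, d_new
    # local update rule: presynaptic activity times backward signal
    grads_wave = np.array([d[l + 1] * x[l] for l in range(L)])

# instantaneous backprop gradient at the final step, for comparison
x_inst = np.zeros(L + 1)
x_inst[0] = slow_input(T - 1)
for l in range(1, L + 1):
    x_inst[l] = w[l - 1] * x_inst[l - 1]
d_inst = np.zeros(L + 1)
d_inst[L] = x_inst[L] - 0.5 * slow_input(T - 1)
for l in range(L - 1, 0, -1):
    d_inst[l] = w[l] * d_inst[l + 1]
grads_inst = np.array([d_inst[l + 1] * x_inst[l] for l in range(L)])

print("wave-based gradient:   ", np.round(grads_wave, 4))
print("instantaneous gradient:", np.round(grads_inst, 4))
```

In this toy setting the two printed gradients should agree closely because the input period is long compared with the depth; making the input faster lets the waves lag behind and the two diverge, mirroring the condition stated in the abstract.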