Integer Echo State Networks: Hyperdimensional Reservoir Computing
We propose an approximation of Echo State Networks (ESN) that can be
efficiently implemented on digital hardware based on the mathematics of
hyperdimensional computing. The reservoir of the proposed Integer Echo State
Network (intESN) is a vector containing only n-bit integers (where n < 8 is
normally sufficient for satisfactory performance). The recurrent matrix
multiplication is replaced with an efficient cyclic shift operation. The intESN
architecture is verified on typical reservoir computing tasks: memorizing a
sequence of inputs, classifying time series, and learning dynamic processes.
The architecture yields dramatic improvements in memory footprint and
computational efficiency, with minimal performance loss.
Comment: 10 pages, 10 figures, 1 table
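The core idea above, replacing the recurrent weight matrix with a cyclic shift and keeping the state as small clipped integers, can be sketched as follows. This is a minimal illustration based only on the abstract; the clipping threshold `kappa`, the state dimension, and the random bipolar input encoding are assumptions, not details from the paper.

```python
import numpy as np

def intesn_update(state, inp_vec, kappa=3):
    """One intESN-style reservoir step (sketch): a cyclic shift
    stands in for the recurrent matrix multiply, and clipping to
    [-kappa, kappa] keeps the state in small n-bit integers."""
    shifted = np.roll(state, 1)               # cyclic shift ~ recurrence
    new_state = shifted + inp_vec             # add bipolar-encoded input
    return np.clip(new_state, -kappa, kappa)  # bounded integer activation

# Illustrative usage: feed 50 random bipolar tokens into the reservoir.
rng = np.random.default_rng(0)
dim = 1000
state = np.zeros(dim, dtype=np.int8)
for _ in range(50):
    inp = rng.choice([-1, 1], size=dim).astype(np.int8)  # assumed encoding
    state = intesn_update(state, inp)
```

Because the state never leaves `[-kappa, kappa]`, each reservoir element fits comfortably in a few bits, which is the source of the memory savings the abstract describes.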
The relation of phase noise and luminance contrast to overt attention in complex visual stimuli
Models of attention are typically based on difference maps in low-level features but neglect higher order stimulus structure. To what extent do higher order statistics affect human attention in natural stimuli? We recorded eye movements while observers viewed unmodified and modified images of natural scenes. Modifications included contrast modulations (resulting in changes to first- and second-order statistics), as well as the addition of noise to the Fourier phase (resulting in changes to higher order statistics). We have the following findings: (1) Subjects' interpretation of a stimulus as a "natural" depiction of an outdoor scene depends on higher order statistics in a highly nonlinear, categorical fashion. (2) Confirming previous findings, contrast is elevated at fixated locations for a variety of stimulus categories. In addition, we find that the size of this elevation depends on higher order statistics and decreases with increasing phase noise. (3) Global modulations of contrast bias eye position toward high contrasts, consistent with a linear effect of contrast on fixation probability. This bias is independent of phase noise. (4) Small patches of locally decreased contrast repel eye position less than large patches of the same aggregate area, irrespective of phase noise. Our findings provide evidence that deviations from surrounding statistics, rather than contrast per se, underlie the well-established relation of contrast to fixation.
Faces and text attract gaze independent of the task: Experimental data and computer model
Previous studies of eye gaze have shown that when looking at images containing human faces, observers tend to rapidly
focus on the facial regions. But is this true of other high-level image features as well? Here we investigate the extent to
which natural scenes containing faces, text elements, and cell phones - as a suitable control - attract attention by tracking
the eye movements of subjects in two types of tasks - free viewing and search. We observed that subjects in free-viewing
conditions look at faces and text 16.6 and 11.1 times more, respectively, than similar regions normalized for the size and
position of the face or text. In terms of attracting gaze, text is almost as effective as faces. Furthermore, it is difficult to
avoid looking at faces and text even when doing so imposes a cost. We also found that subjects took longer to make their
initial saccade when they were told to avoid faces/text and their saccades landed on a non-face/non-text object. We refine
a well-known bottom-up computer model of saliency-driven attention that includes conspicuity maps for color, orientation,
and intensity by adding high-level semantic information (i.e., the location of faces or text) and demonstrate that this
significantly improves the ability to predict eye fixations in natural images. Our enhanced model's predictions yield an
area under the ROC curve over 84% for images that contain faces or text when compared against the actual fixation
pattern of subjects. This suggests that the primate visual system allocates attention using such an enhanced saliency map.
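The model refinement described above, adding a high-level semantic channel (face/text locations) to the usual low-level conspicuity maps, can be sketched as a simple map combination. This is an illustrative sketch only: the normalization scheme, the equal weighting of low-level channels, and the semantic weight `w_sem` are assumptions, not the paper's actual parameters.

```python
import numpy as np

def combined_saliency(color, orientation, intensity, semantic, w_sem=2.0):
    """Combine low-level conspicuity maps with a high-level semantic
    map (e.g., detected face/text regions) into a single saliency map.
    All weights here are illustrative placeholders."""
    def norm(m):
        m = m - m.min()                       # shift to zero minimum
        return m / m.max() if m.max() > 0 else m  # scale to [0, 1]
    low = norm(color) + norm(orientation) + norm(intensity)
    return norm(low + w_sem * norm(semantic))

# Illustrative usage with random low-level maps and one "face" region.
rng = np.random.default_rng(2)
color, orientation, intensity = (rng.random((32, 32)) for _ in range(3))
semantic = np.zeros((32, 32))
semantic[10:14, 10:14] = 1.0                  # hypothetical face location
sal = combined_saliency(color, orientation, intensity, semantic)
```

The semantic region dominates the combined map when `w_sem` exceeds the low-level channel weights, mirroring the finding that faces and text attract gaze far more strongly than generic regions.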
Cellular Automata Can Reduce Memory Requirements of Collective-State Computing
Various non-classical approaches to distributed information processing, such
as neural networks, computation with Ising models, reservoir computing, vector
symbolic architectures, and others, employ the principle of collective-state
computing. In this type of computing, the variables relevant in a computation
are superimposed into a single high-dimensional state vector, the
collective state. The variable encoding uses a fixed set of random patterns,
which has to be stored and kept available during the computation. Here we show
that an elementary cellular automaton with rule 90 (CA90) enables a space-time
tradeoff for collective-state computing models that use random dense binary
representations, i.e., memory requirements can be traded off against the
computation of running CA90. We investigate the randomization behavior of
CA90, in particular the relation between the length of the randomization
period and the size of the grid, and how CA90 preserves similarity in the
presence of initialization noise. Based on these analyses we discuss how to
optimize a collective-state computing model in which CA90 expands
representations on the fly from short seed patterns - rather than storing the
full set of random patterns. The CA90 expansion is applied and tested in
concrete scenarios using reservoir computing and vector symbolic
architectures. Our experimental results show that collective-state computing
with CA90 expansion performs on par with traditional collective-state models,
in which random patterns are generated initially by a pseudo-random number
generator and then stored in a large memory.
Comment: 13 pages, 11 figures
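Rule 90 itself is simple enough to state in one line: on a cyclic grid, each cell's next value is the XOR of its two neighbors. The expansion idea above, regenerating long pseudo-random patterns from a short stored seed by iterating CA90, can be sketched as follows; the grid size, number of steps, and seed generation are illustrative assumptions.

```python
import numpy as np

def ca90_step(grid):
    """One step of elementary cellular automaton rule 90 on a cyclic
    grid: each cell becomes the XOR of its left and right neighbors."""
    return np.roll(grid, 1) ^ np.roll(grid, -1)

def ca90_expand(seed, steps):
    """Expand a short random seed into `steps` binary patterns by
    iterating CA90 - a sketch of the space-time tradeoff described
    above: store one seed, recompute the patterns on the fly."""
    patterns, g = [], seed.copy()
    for _ in range(steps):
        g = ca90_step(g)
        patterns.append(g.copy())
    return patterns

# Illustrative usage: one stored seed replaces ten stored patterns.
rng = np.random.default_rng(1)
seed = rng.integers(0, 2, size=512, dtype=np.uint8)
pats = ca90_expand(seed, 10)
```

Instead of storing all ten 512-bit patterns, a model following this scheme stores only the seed and reruns the automaton whenever a pattern is needed, trading memory for (cheap, XOR-only) computation.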