Using Variable Dwell Time to Accelerate Gaze-Based Web Browsing with Two-Step Selection
In order to avoid the "Midas Touch" problem, gaze-based interfaces for
selection often introduce a dwell time: a fixed amount of time the user must
fixate upon an object before it is selected. Past interfaces have used a
uniform dwell time across all objects. Here, we propose a gaze-based browser
using a two-step selection policy with variable dwell time. In the first step,
a command, e.g. "back" or "select", is chosen from a menu using a dwell time
that is constant across the different commands. In the second step, if the
"select" command is chosen, the user selects a hyperlink using a dwell time
that varies between different hyperlinks. We assign shorter dwell times to more
likely hyperlinks and longer dwell times to less likely hyperlinks. In order to
infer the likelihood each hyperlink will be selected, we have developed a
probabilistic model of natural gaze behavior while surfing the web. We have
evaluated a number of heuristic and probabilistic methods for varying the dwell
times using both simulation and experiment. Our results demonstrate that
varying dwell time improves the user experience in comparison with fixed dwell
time, resulting in fewer errors and increased speed. While all of the methods
for varying dwell time resulted in improved performance, the probabilistic
models yielded much greater gains than the simple heuristics. The best
performing model reduces error rate by 50% compared to 100ms uniform dwell time
while maintaining a similar response time. It reduces response time by 60%
compared to 300ms uniform dwell time while maintaining a similar error rate.Comment: This is an Accepted Manuscript of an article published by Taylor &
Francis in the International Journal of Human-Computer Interaction on 30
March, 2018, available online:
http://www.tandfonline.com/10.1080/10447318.2018.1452351 . For an eprint of
the final published article, please access:
https://www.tandfonline.com/eprint/T9d4cNwwRUqXPPiZYm8Z/ful
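The central mapping described above, shorter dwell times for more likely hyperlinks, can be sketched as follows. The linear interpolation, the probability normalization, and the `t_min`/`t_max` bounds (chosen here to echo the 100 ms and 300 ms uniform baselines) are illustrative assumptions, not the paper's actual probabilistic model of gaze behavior:

```python
import numpy as np

def variable_dwell_times(link_probs, t_min=0.1, t_max=0.3):
    """Map each hyperlink's estimated selection probability to a dwell
    time: likely links get times near t_min, unlikely ones near t_max."""
    p = np.asarray(link_probs, dtype=float)
    p = p / p.sum()                      # normalize to a distribution
    return t_min + (t_max - t_min) * (1.0 - p)

# The most likely link (p=0.6) receives the shortest dwell time.
times = variable_dwell_times([0.6, 0.3, 0.1])
```

Any monotone decreasing map of probability to dwell time would fit the description; the linear form is simply the most transparent choice for a sketch.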
Intrinsically Motivated Learning of Visual Motion Perception and Smooth Pursuit
We extend the framework of efficient coding, which has been used to model the
development of sensory processing in isolation, to model the development of the
perception/action cycle. Our extension combines sparse coding and reinforcement
learning so that sensory processing and behavior co-develop to optimize a
shared intrinsic motivational signal: the fidelity of the neural encoding of
the sensory input under resource constraints. Applying this framework to a
model system consisting of an active eye behaving in a time varying
environment, we find that this generic principle leads to the simultaneous
development of both smooth pursuit behavior and model neurons whose properties
are similar to those of primary visual cortical neurons selective for different
directions of visual motion. We suggest that this general principle may form
the basis for a unified and integrated explanation of many perception/action
loops.

Comment: 6 pages, 5 figures
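The shared intrinsic reward described above can be illustrated with a toy loop in which both a linear dictionary (perception) and a crude action-value table (behavior) are updated from the same encoding-fidelity signal. The patch-selection actions, learning rates, and least-squares coding below are invented for this sketch and do not reproduce the paper's sparse coding or reinforcement learning machinery:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoding_fidelity(x, D):
    """Intrinsic reward: negative squared error of reconstructing input
    x from its least-squares code under dictionary D (higher = better)."""
    code, *_ = np.linalg.lstsq(D, x, rcond=None)
    return -np.linalg.norm(x - D @ code) ** 2

D = rng.normal(size=(8, 4))               # "neural" encoding dictionary
signal = np.sin(np.linspace(0, 2 * np.pi, 32))
action_values = np.zeros(3)               # value estimate per action
for step in range(50):
    # Behavior: pick the action (sensed patch) expected to be best.
    a = int(np.argmax(action_values + rng.normal(scale=0.1, size=3)))
    x = signal[a : a + 8]
    r = encoding_fidelity(x, D)
    action_values[a] += 0.1 * (r - action_values[a])    # RL update
    # Perception: nudge the dictionary toward reconstructing x.
    code, *_ = np.linalg.lstsq(D, x, rcond=None)
    D += 0.01 * np.outer(x - D @ code, code)
```

The point of the sketch is only structural: one scalar signal, the fidelity of the encoding, drives both the sensory representation and the action selection.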
Invariant feature extraction from event based stimuli
We propose a novel architecture, the event-based GASSOM for learning and
extracting invariant representations from event streams originating from
neuromorphic vision sensors. The framework is inspired by feed-forward cortical
models for visual processing. The model, which is based on the concepts of
sparsity and temporal slowness, is able to learn feature extractors that
resemble neurons in the primary visual cortex. Layers of units in the proposed
model can be cascaded to learn feature extractors with different levels of
complexity and selectivity. We explore the applicability of the framework on
real world tasks by using the learned network for object recognition. The
proposed model achieves higher classification accuracy than other
state-of-the-art event-based processing methods. Our results also demonstrate
the generality and robustness of the method, as the recognizers for different
data sets and different tasks all used the same set of learned feature
detectors, which were trained on data collected independently of the testing
data.

Comment: 6 pages
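One of the two principles named above, temporal slowness, can be stated concretely as a cost on feature trajectories. The quadratic form below is a standard slowness measure and is an assumption for illustration, not the event-based GASSOM's exact objective:

```python
import numpy as np

def slowness_cost(features):
    """Temporal slowness objective: mean squared change of feature
    activations between consecutive time steps (lower = slower)."""
    f = np.asarray(features, dtype=float)
    return np.mean(np.sum(np.diff(f, axis=0) ** 2, axis=1))

# A slowly varying feature trajectory scores lower than a fast one.
t = np.linspace(0.0, 1.0, 100)[:, None]
slow = np.hstack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
fast = np.hstack([np.sin(20 * np.pi * t), np.cos(20 * np.pi * t)])
```

Minimizing such a cost, alongside a sparsity penalty, is what pushes learned feature extractors toward stable, invariant representations.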
Competitively coupled orientation selective cellular neural networks
We extend previous work on orientation selective cellular neural networks to include competitive couplings between different layers tuned to different orientations and spatial frequencies. The presence of these interactions sharpens the spatial frequency tuning of the filters in two ways compared with a similar, previously proposed architecture that lacks them. The first is the introduction of nulls in the frequency response. The second is the introduction of constraints on the passbands of the coupled layers. Based on an understanding of these two effects, we propose a method for choosing the spatial frequency tunings of the individual layers to enhance orientation selectivity in the coupled system.
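The effect of competitive coupling between layers can be sketched with two mutually inhibiting responses, where the stronger (on-orientation) response suppresses the weaker (off-orientation) one. The fixed-point iteration, coupling weight `w`, and rectification below are assumptions made for this sketch, not the paper's cellular neural network dynamics:

```python
def competitive_responses(r1, r2, w=0.5, steps=20):
    """Iterate two mutually inhibiting layer responses until they
    settle; each response is the input minus w times the other,
    rectified at zero."""
    a, b = float(r1), float(r2)
    for _ in range(steps):
        # Simultaneous update: each layer sees the other's last value.
        a, b = max(r1 - w * b, 0.0), max(r2 - w * a, 0.0)
    return a, b

# With competition, the weaker response is suppressed much more than
# the stronger one, sharpening the effective orientation selectivity.
a, b = competitive_responses(1.0, 0.6)
```

With `w=0` the coupling vanishes and both responses pass through unchanged, which makes the sharpening attributable entirely to the competitive term.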