Similar but separate systems underlie perceptual bistability in vision and audition
The dynamics of perceptual bistability, the phenomenon in which perception switches between different interpretations of an unchanging stimulus, are characterised by very similar properties across a wide range of qualitatively different paradigms. This suggests that perceptual switching may be triggered by some common source. However, it is also possible that perceptual switching may arise from a distributed system, whose components vary according to the specifics of the perceptual experiences involved. Here we used a visual and an auditory task to determine whether individuals show cross-modal commonalities in perceptual switching. We found that individual perceptual switching rates were significantly correlated across modalities. We then asked whether perceptual switching arises from some central, modality- and task-independent process or from a more distributed, task-specific system. We found that a log-normal distribution best explained the distribution of perceptual phases in both modalities, suggestive of a combined set of independent processes causing perceptual switching. Modality- and/or task-dependent differences in these distributions, and the lack of correlation with the modality-independent central factors tested (ego-resiliency, creativity, and executive function), also point towards perceptual switching arising from a distributed system of similar but independent processes.
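The distributional comparison described above can be illustrated with a small sketch (a toy example, not the authors' analysis): fit a log-normal by its closed-form maximum-likelihood estimates and a gamma by method of moments to a set of phase durations, then compare log-likelihoods. All data and parameter values here are invented for illustration.

```python
import math
import random

def lognormal_loglik(data):
    # Closed-form ML fit of a log-normal: mean and std of the log-durations.
    logs = [math.log(d) for d in data]
    n = len(logs)
    mu = sum(logs) / n
    sigma = math.sqrt(sum((l - mu) ** 2 for l in logs) / n)
    return sum(-math.log(d * sigma * math.sqrt(2 * math.pi))
               - (math.log(d) - mu) ** 2 / (2 * sigma ** 2) for d in data)

def gamma_loglik(data):
    # Method-of-moments gamma fit (shape k, scale theta).
    n = len(data)
    mean = sum(data) / n
    var = sum((d - mean) ** 2 for d in data) / n
    k, theta = mean ** 2 / var, var / mean
    return sum((k - 1) * math.log(d) - d / theta
               - k * math.log(theta) - math.lgamma(d and k) for d in data)

random.seed(1)
# Synthetic perceptual phase durations (seconds); illustrative only.
phases = [random.lognormvariate(0.8, 0.5) for _ in range(500)]
winner = "log-normal" if lognormal_loglik(phases) > gamma_loglik(phases) else "gamma"
```

In practice one would compare penalized criteria (e.g. AIC/BIC) over several candidate families rather than raw likelihoods, but the shape of the comparison is the same.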
Quasiperiodic perturbations of heteroclinic attractor networks
We consider heteroclinic attractor networks motivated by models of competition between neural populations during binocular rivalry. We show that gamma distributions of dominance times observed experimentally in binocular rivalry and other forms of bistable perception, commonly explained by means of noise in the models, can be achieved with quasiperiodic perturbations. For this purpose, we present a methodology based on the separatrix map to model the dynamics close to heteroclinic networks with quasiperiodic perturbations. Our methodology unifies two different approaches, one based on Melnikov integrals and the other one based on variational equations. We apply it to two models: first, to the Duffing equation, which comes from the perturbation of a Hamiltonian system and, second, to a heteroclinic attractor network for binocular rivalry, for which we develop a suitable method based on Melnikov integrals for non-Hamiltonian systems. In both models, the perturbed system shows chaotic behavior, while dominance times achieve good agreement with gamma distributions. Moreover, the separatrix map provides a new (discrete) model for bistable perception which, in addition, replaces the numerical integration of time-continuous models and, consequently, reduces the computational cost and avoids numerical instabilities.
Peer Reviewed. Postprint (author's final draft).
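The core idea, that a deterministic quasiperiodic perturbation alone can make dominance durations irregular without any noise term, can be caricatured in a few lines. This is a deliberately simplified threshold-crossing toy, not the paper's separatrix-map model; all parameters are invented: adaptation of the dominant percept ramps up, and a switch fires when it crosses a threshold wobbled by two incommensurate frequencies.

```python
import math

# Toy caricature: deterministic switching with a quasiperiodically
# perturbed threshold. No stochastic term anywhere, yet the resulting
# dominance durations are irregular.
dt, ramp, theta, eps = 0.01, 0.1, 1.0, 0.15
w1, w2 = 1.0, math.sqrt(2)            # incommensurate frequencies
a, t, last_switch = 0.0, 0.0, 0.0     # adaptation, time, last switch time
durations = []
while t < 200.0:
    a += ramp * dt                    # adaptation of the dominant percept
    if a > theta + eps * (math.sin(w1 * t) + math.sin(w2 * t)):
        durations.append(t - last_switch)
        last_switch, a = t, 0.0       # perceptual switch: reset adaptation
    t += dt
mean_d = sum(durations) / len(durations)
```

Each duration is bounded roughly by (theta ± 2·eps)/ramp, but the quasiperiodic phase at each switch differs, so the sequence never repeats exactly.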
Bistable Perception Is Biased by Search Items but Not by Search Priming
Publisher's version (útgefin grein).

During visual search, selecting a target facilitates search for similar targets in the future, known as search priming. During bistable perception, in turn, perceiving one interpretation facilitates perception of the same interpretation in the future, a form of sensory memory. Previously, we investigated the relation between these history effects by asking: can visual search influence perception of a subsequent ambiguous display, and can perception of an ambiguous display influence subsequent visual search? We found no evidence for such influences, however. Here, we investigated one potential factor that might have prevented such influences from arising: lack of retinal overlap between the ambiguous stimulus and the search array items. In the present work, we therefore interleaved presentations of an ambiguous stimulus with search trials in which the target or distractor occupied the same retinal location as the ambiguous stimulus. Nevertheless, we again found no evidence for influences of visual search on bistable perception, thus demonstrating no close relation between search priming and sensory memory. We did, however, find that visual search items primed perception of a subsequent ambiguous stimulus at the same retinal location, regardless of whether they were a target or a distractor item: a form of perceptual priming. Interestingly, the strengths of search priming and this perceptual priming were correlated on a trial-to-trial basis, suggesting that a common underlying factor influences both.

The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: M. A. B. B. is supported by the Icelandic Research Fund (Rannis, #130575-051). A. K. is supported by the European Research Council (grant 643636), the Icelandic Research Fund (#152427-051 & #173947-051), and the Research Fund at the University of Iceland.
The "laws" of binocular rivalry: 50 years of Levelt's propositions
It has been fifty years since Levelt's monograph On Binocular Rivalry (1965) was published, but its four propositions that describe the relation between stimulus strength and the phenomenology of binocular rivalry remain a benchmark for theorists and experimentalists even today. In this review, we will revisit the original conception of the four propositions and the scientific landscape in which this happened. We will also provide a brief update concerning distributions of dominance durations, another aspect of Levelt's monograph that has maintained a prominent presence in the field. In a critical evaluation of Levelt's propositions against current knowledge of binocular rivalry we will then demonstrate that the original propositions are not completely compatible with what is known today, but that they can, in a straightforward way, be modified to encapsulate the progress that has been made over the past fifty years. The resulting modified propositions are shown to apply to a broad range of bistable perceptual phenomena, not just binocular rivalry, and they allow important inferences about the underlying neural systems. We argue that these inferences reflect canonical neural properties that play a role in visual perception in general, and we discuss ways in which future research can build on the work reviewed here to attain a better understanding of these properties.
Recurrent network dynamics reconciles visual motion segmentation and integration
In sensory systems, a range of computational rules are presumed to be implemented by neuronal subpopulations with different tuning functions. For instance, in primate cortical area MT, different classes of direction-selective cells have been identified and related either to motion integration, segmentation or transparency. Still, how such different tuning properties are constructed is unclear. The dominant theoretical viewpoint based on a linear-nonlinear feed-forward cascade does not account for their complex temporal dynamics and their versatility when facing different input statistics. Here, we demonstrate that a recurrent network model of visual motion processing can reconcile these different properties. Using a ring network, we show how excitatory and inhibitory interactions can implement different computational rules such as vector averaging, winner-take-all or superposition. The model also captures ordered temporal transitions between these behaviors. In particular, depending on the inhibition regime, the network can switch from motion integration to segmentation, thus being able to compute either a single pattern motion or to superpose multiple inputs as in motion transparency. We thus demonstrate that recurrent architectures can adaptively give rise to different cortical computational regimes depending upon the input statistics, from sensory flow integration to segmentation.
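The contrast between vector averaging and winner-take-all can be made concrete with a small readout sketch (a generic illustration, not the paper's recurrent ring model; bump positions, heights and widths below are invented). Given a population response with two direction-tuned bumps, a vector-average readout reports an intermediate direction, while a winner-take-all readout reports the direction of the strongest bump.

```python
import math

directions = list(range(-180, 180, 5))   # preferred directions, degrees

def bump(center, height, width=20.0):
    # Gaussian bump of activity over preferred directions.
    return [height * math.exp(-0.5 * ((d - center) / width) ** 2)
            for d in directions]

# Two transparent-motion-like components at -60 and +60 degrees.
rate = [a + b for a, b in zip(bump(-60, 1.0), bump(60, 1.2))]

# Vector-average (population vector) readout: intermediate direction.
x = sum(r * math.cos(math.radians(d)) for r, d in zip(rate, directions))
y = sum(r * math.sin(math.radians(d)) for r, d in zip(rate, directions))
vector_avg = math.degrees(math.atan2(y, x))

# Winner-take-all readout: direction of the strongest bump.
wta = directions[max(range(len(rate)), key=rate.__getitem__)]
```

In the recurrent-network account, which readout the population effectively computes is not fixed but depends on the inhibition regime; here the two rules are simply applied side by side to the same activity profile.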
Experimental Manipulation of Action Perception Based on Modeling Computations in Visual Cortex
Action perception, planning and execution is a broad area of study, crucial for future development of clinical therapies treating social cognitive disorders, as well as for building human-computer interaction systems and for giving foundation to the emerging field of developmental robotics. We took interest in basic mechanisms of action perception, and as a model area chose dynamic perception of body motion. The focus of this thesis has been on understanding how perception of actions can be manipulated, how to distill this understanding experimentally, and how to summarize via numerical simulation the neural mechanisms helping explain the observed dynamic phenomena.

Experimentally we have, first, shown how a careful manipulation of a static object depth cue can in principle modulate perception of actions. We chose the luminance gradient as a model cue, and linked action perception to a perceptual prior previously studied in object recognition: the lighting-from-above prior. Second, we have explored the dynamic relationship between representations of actions that are naturally observed in spatiotemporal proximity. We have shown an adaptation aftereffect that may speak to brain mechanisms encoding social interactions.

To qualitatively capture the neural mechanisms behind our own and previous findings, we have additionally appealed to the phenomenon of perceptual bistability. Bistable perception refers to the ability to spontaneously switch between two perceptual alternatives arising from observation of a single stimulus. Addition of depth cues to a biological motion stimulus resolves this depth ambiguity. To account for neural dynamics as well as for modulation of the action percept by light source position, we used a combined architecture with a convolutional neural network computing shading and form features in biological motion stimuli, and a two-dimensional neural field coding for walking direction and body configuration in the gait cycle. This single unified model matches experimentally observed switching statistics and the dependence of recognized walking direction on the light source position, and makes a prediction for the adaptation aftereffect in perception of biological motion.
Neurophysiological investigation of the lateral prefrontal cortex during the task of binocular flash suppression
Multistable visual phenomena, wherein unchanging sensory input elicits perceptual fluctuations in an observer, have been instrumental in unravelling the neural correlates of conscious perception. Such paradigms, when combined with single unit recordings in macaques trained to report their perception, have allowed neurophysiologists to elucidate whether cells in various regions of the brain are correlated with subjective experience or respond to the invariant retinal input. Results obtained from such an approach have so far revealed that the proportion of feature-selective cells which fire in concordance with perception increases as one progresses along the ventral visual pathway, with this fraction being up to 90% in the temporal lobe.
The next station in the ventral stream of vision is the lateral prefrontal cortex (LPFC), which has reciprocal connectivity with the inferotemporal cortex and displays responses which are selective for complex visual stimuli. However, it is not clear if this feature-selective neural activity is just the result of sensory input or is related to subjective perception. Utilizing the task of binocular flash suppression (BFS), a psychophysical paradigm capable of dissociating perception from the retinal message, we probed the neural responses in the LPFC. The results revealed a robust perceptual modulation of both the spiking activity as well as high frequency gamma oscillations in this region of the brain. Even though single unit activity is robustly modulated according to perceptual content, a measure of effective functional connectivity between pairs of neurons, such as correlated variability, could be revealing of interactions among neuronal populations during visual ambiguity. We therefore computed the spike count correlations across pairs of simultaneously recorded neurons during subjective visual perception. Interestingly, such interneuronal correlations among single units which preferred the same stimulus were close to zero during incongruent visual input, thus reflecting a modulation of the correlation structure during visual perception. Simulations with biophysically realistic networks suggested that the source of decorrelation was an active suppression of input fluctuations. This suggests that such a decorrelated state might be critical for the representation of conscious content during visual conflict.
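The spike count correlation used above is a Pearson correlation between the trial-by-trial spike counts of two neurons. A minimal sketch (generic formula, not the thesis' analysis pipeline; the toy counts are invented):

```python
import math

def spike_count_corr(x, y):
    # Pearson correlation of two neurons' spike counts across trials.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    return cov / math.sqrt(vx * vy)

counts_a = [3, 5, 4, 6, 2, 7, 5, 4]        # neuron A, spikes per trial
counts_b = [2 * c + 1 for c in counts_a]   # B co-varies perfectly with A
counts_c = list(reversed(counts_a))        # C tends to vary against A

r_ab = spike_count_corr(counts_a, counts_b)   # exactly 1.0
r_ac = spike_count_corr(counts_a, counts_c)   # negative
```

A correlation near zero, as reported for same-preference pairs under incongruent input, means that the trial-to-trial count fluctuations of the two neurons are statistically unrelated, which is what the biophysical simulations attribute to active suppression of shared input fluctuations.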
These results together lend credence to the "frontal lobe hypothesis" proposed by Crick and Koch, which suggested that the planning stages of the brain must have explicit access to the conscious visual percept so as to direct motor output. Such access is essential if the LPFC is to carry out one of its major functions, namely cognitive control. Interestingly, when a control-related signal, namely the modulation pattern of beta band oscillations in the LPFC, was analyzed, it was unchanged not only across monocular and incongruent visual stimulation but also during perceptual dominance and suppression. This suggests that a signal related to control processes is unaffected by local conscious or unconscious neural processing.
Lastly, we observed an enormous diversity among the patterns of single unit activity recorded in the LPFC, and the neurons which displayed visual preference were just a minority. To elucidate whether there were any other patterns of activity related to the task, we clustered the neuronal responses using a non-negative matrix factorization (NNMF) method. This revealed five sequential dominant response patterns (or components) whose peaks were temporally distributed across various phases of the trial. A majority of the units with firing profiles similar to the patterns obtained maintained their responses across monocular or incongruent stimulation, suggesting that visual conflict did not affect their spiking modulation. Interestingly, an assessment of the effective functional connectivity across pairs of neurons belonging to different temporally distributed components revealed that such correlated variability was maximal among units which were temporally coincident. However, we observed successive decorrelation as the pairs of units were chosen from temporally separated populations. This suggests a computational principle mediating the representation of sequential patterns of activity in the LPFC.
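The kind of decomposition described above can be sketched with a minimal non-negative matrix factorization using Lee-Seung multiplicative updates (a generic NMF, not the thesis' exact method; the toy data are invented). A matrix V of responses (units x time bins) is approximated as W x H with all entries non-negative, so each row of H acts as a temporal component and each row of W as a unit's loadings on those components.

```python
import random

def matmul(A, B):
    BT = list(zip(*B))
    return [[sum(a * b for a, b in zip(row, col)) for col in BT] for row in A]

def frob_err(V, W, H):
    # Squared Frobenius reconstruction error ||V - WH||^2.
    R = matmul(W, H)
    return sum((v - r) ** 2 for Vr, Rr in zip(V, R) for v, r in zip(Vr, Rr))

def nmf(V, k, iters=200, eps=1e-9):
    # Lee-Seung multiplicative updates; error is non-increasing per step.
    random.seed(0)
    n, m = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(k)] for _ in range(n)]
    H = [[random.random() + 0.1 for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WT = list(map(list, zip(*W)))
        num, den = matmul(WT, V), matmul(matmul(WT, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        HT = list(map(list, zip(*H)))
        num, den = matmul(V, HT), matmul(W, matmul(H, HT))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

# Toy "responses": mixtures of an early and a late temporal pattern.
early, late = [5, 4, 1, 0, 0, 0], [0, 0, 1, 3, 5, 4]
V = [[e * a + l * b for e, l in zip(early, late)]
     for a, b in [(1.0, 0.1), (0.9, 0.2), (0.1, 1.0), (0.2, 0.8)]]
W, H = nmf(V, k=2)
```

With k = 2 the recovered rows of H should resemble the early and late patterns; the thesis applies the same principle with five components to real firing profiles.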
Together, the results presented in this thesis suggest a role for the LPFC in the representation of conscious content. At the same time, we find that this role is coexistent with other major functions typically attributed to this area, such as cognitive control or temporal encoding of task events through sequential neural activity.
Multi-stable perception balances stability and sensitivity
Supplemental information: http://www.frontiersin.org/Computational_Neuroscience/10.3389/fncom.2013.00017/abstract

We report that multi-stable perception operates in a consistent, dynamical regime, balancing the conflicting goals of stability and sensitivity. When a multi-stable visual display is viewed continuously, its phenomenal appearance reverses spontaneously at irregular intervals. We characterized the perceptual dynamics of individual observers in terms of four statistical measures: the distribution of dominance times (mean and variance) and the novel, subtle dependence on prior history (correlation and time-constant). The dynamics of multi-stable perception is known to reflect several stabilizing and destabilizing factors. Phenomenologically, its main aspects are captured by a simplistic computational model with competition, adaptation, and noise. We identified small parameter volumes (~3% of the possible volume) in which the model reproduced both dominance distribution and history-dependence of each observer. For 21 of 24 data sets, the identified volumes clustered tightly (~15% of the possible volume), revealing a consistent "operating regime" of multi-stable perception. The "operating regime" turned out to be marginally stable or, equivalently, near the brink of an oscillatory instability. The chance probability of the observed clustering was <0.02. To understand the functional significance of this empirical "operating regime," we compared it to the theoretical "sweet spot" of the model. We computed this "sweet spot" as the intersection of the parameter volumes in which the model produced stable perceptual outcomes and in which it was sensitive to input modulations. Remarkably, the empirical "operating regime" proved to be largely coextensive with the theoretical "sweet spot." This demonstrated that perceptual dynamics was not merely consistent but also functionally optimized (in that it balances stability with sensitivity).
Our results imply that multi-stable perception is not a laboratory curiosity, but reflects a functional optimization of perceptual dynamics for visual inference.

Alexander Pastukhov, Joachim Haenicke, and Jochen Braun: BMBF Bernstein Network, EU FP7-269459. Gustavo Deco: BFU2007-61710, Consolider Ingenio 2010, FP7 Brainsync, ITN Codde. Antoni Guillamon: MICINN/FEDER MTM2009-06973 and CUR-DIUE 2009SGR-859. Pedro E. García-Rodríguez: BFU2007-61710.
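The statistical measures used to characterize each observer can be computed directly from a sequence of dominance durations. This is a generic sketch with invented toy numbers, not the authors' data or full analysis (in particular, their history dependence involves a correlation and a time-constant; here only a lag-1 serial correlation is computed as a simplification):

```python
def dominance_stats(durations):
    # Mean, coefficient of variation, and lag-1 serial correlation
    # between successive dominance durations.
    n = len(durations)
    mean = sum(durations) / n
    var = sum((d - mean) ** 2 for d in durations) / n
    cv = var ** 0.5 / mean
    x, y = durations[:-1], durations[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    sx = (sum((a - mx) ** 2 for a in x) / len(x)) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / len(y)) ** 0.5
    lag1 = cov / (sx * sy)
    return mean, cv, lag1

# A perfectly alternating toy sequence: short, long, short, long, ...
mean_d, cv_d, lag1_d = dominance_stats([1.0, 2.0, 1.0, 2.0, 1.0, 2.0])
```

For this contrived alternating sequence the lag-1 correlation is exactly -1; real dominance sequences show much weaker, but non-zero, history dependence, which is what constrains the model's parameter volume.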