201 research outputs found
On the neural substrates leading to the emergence of mental operational structures
A developmental approach to the study of the emergence of mental operational structures in neural networks is presented. Neural architectures proposed to underlie the six stages of the sensory-motor period are discussed.
Perception, cognition, and action in hyperspaces: implications on brain plasticity, learning, and cognition
We live in a three-dimensional (3D) spatial world; however, our retinas receive a pair of 2D projections of the 3D environment. By using multiple cues, such as disparity, motion parallax, and perspective, our brains construct 3D representations of the world from the 2D projections on our retinas. These 3D representations underlie our 3D perception of the world and are mapped onto our motor systems to generate accurate sensorimotor behaviors. Three-dimensional perceptual and sensorimotor capabilities emerge during development: the physiology of the growing baby changes, necessitating an ongoing re-adaptation of the mapping between 3D sensory representations and motor coordinates. This adaptation continues into adulthood and is general enough to deal successfully with joint-space changes (longer arms due to growth), changes in skull and eye size (while still permitting accurate eye movements), etc. A fundamental question is whether our brains are inherently limited to 3D representations of the environment because we live in a 3D world, or whether our brains have the inherent capability and plasticity to represent arbitrary dimensions, with 3D representations emerging from the fact that our development and learning take place in a 3D world. Here, we review research on the inherent capabilities and limitations of brain plasticity in terms of its spatial representations, and discuss whether, with appropriate training, humans can build perceptual and sensorimotor representations of 4D spatial environments, and how the presence or absence of a solid and direct 4D representation can reveal underlying neural representations of space.
Published version
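The projection idea in the abstract above generalizes directly from 3D to 4D: an n-dimensional point maps to an (n-1)-dimensional "image" by dividing the remaining coordinates by the distance along the dropped axis. A minimal sketch (illustrative only, not taken from the paper; the function name and focal-length parameter are assumptions):

```python
import numpy as np

def project(point, focal=1.0):
    """Perspective projection: map an n-D point to (n-1)-D by dividing
    the first n-1 coordinates by the depth along the last axis."""
    *coords, depth = point
    return np.array(coords) * focal / depth

# A 3D point projected onto a 2D "retina"
p2 = project(np.array([1.0, 2.0, 4.0]))   # -> [0.25, 0.5]

# The same operation applied one dimension up: a 4D point
# projected onto a 3D "retina"
p3 = project(np.array([1.0, 2.0, 3.0, 4.0]))  # -> [0.25, 0.5, 0.75]

print(p2, p3)
```

The same division-by-depth rule works in any dimension, which is one way to render 4D environments for the kind of training studies the abstract discusses.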
Self-organization via active exploration in robotic applications
We describe a neural network based robotic system. Unlike traditional robotic systems, our approach focuses on non-stationary problems. We argue that self-organization is necessary for any system to operate successfully in a non-stationary environment, and that self-organization should be based on an active exploration process. We investigated neural architectures with novelty sensitivity, selective attention, reinforcement learning, habit formation, and flexible-criteria categorization properties, and analyzed the resulting behavior (consisting of an intelligent initiation of exploration) by computer simulation. While various computer vision researchers have recently acknowledged the importance of active processes (Swain and Stricker, 1991), the approaches proposed within this new framework still lack self-organization (Aloimonos and Bandyopadhyay, 1987; Bajcsy, 1988). A self-organizing, neural network based robot (MAVIN) has recently been proposed (Baloch and Waxman, 1991). This robot is capable of position-, size-, and rotation-invariant pattern categorization, recognition, and Pavlovian conditioning. Our robot does not initially have invariant processing properties, because of the emphasis we place on active exploration. We maintain the view that such invariant properties emerge from an internalization of exploratory sensory-motor activity. Rather than coding the equilibria of such mental capabilities, we seek to capture their dynamics, in order to understand on the one hand how the emergence of such invariances is possible, and on the other hand the dynamics that lead to these invariances. The second point is crucial for an adaptive robot to acquire new invariances in non-stationary environments, as demonstrated by the inverting-glass experiments of Helmholtz.
In future work, we will introduce Pavlovian conditioning circuits with the precise objective of achieving the generation, coordination, and internalization of sequences of actions.
Stream specificity and asymmetries in feature binding and content-addressable access in visual encoding and memory
Human memory is content addressable, i.e., contents of the memory can be accessed using partial information about the bound features of a stored item. In this study, we used a cross-feature cuing technique to examine how the human visual system encodes, binds, and retains information about multiple stimulus features within a set of moving objects. We sought to characterize the roles of three different features (position, color, and direction of motion, the latter two of which are processed preferentially within the ventral and dorsal visual streams, respectively) in the construction and maintenance of object representations. We investigated the extent to which these features are bound together across the following processing stages: during stimulus encoding, sensory (iconic) memory, and visual short-term memory. Whereas all features examined here can serve as cues for addressing content, their effectiveness shows asymmetries and varies according to cue–report pairings and the stage of information processing and storage. Position-based indexing theories predict that position should be more effective as a cue compared to other features. While we found a privileged role for position as a cue at the stimulus-encoding stage, position was not the privileged cue at the sensory and visual short-term memory stages. Instead, the pattern that emerged from our findings is one that mirrors the parallel processing streams in the visual system. This stream-specific binding and cuing effectiveness manifests itself in all three stages of information processing examined here. Finally, we find that the Leaky Flask model proposed in our previous study is applicable to all three features.
PRISM: protein interactions by structural matching
PRISM is a website for protein interface analysis and prediction of putative protein–protein interactions. It is built on a database of protein interface structures derived from the Protein Data Bank (PDB). The server also includes summary information about related proteins and an interactive protein interface viewer. A list of putative protein–protein interactions obtained by running our prediction algorithm can also be accessed; these results apply to the set of protein structures obtained from the PDB at the time of algorithm execution (January 2004). Users can browse the non-redundant dataset of representative interfaces on which the prediction algorithm depends, retrieve the list of structures similar to these interfaces, or see the results of interaction predictions for a particular protein. Another service provided is interactive prediction, performed by running the algorithm on user-input structures.
Color and motion: which is the tortoise and which is the hare?
Recent psychophysical studies have been interpreted to indicate that the perception of motion temporally either lags or is synchronous with the perception of color. These results appear to be at odds with neurophysiological data, which show that the average response-onset latency is shorter in the cortical areas responsible for motion processing (e.g., MT and MST) than in those responsible for color processing (e.g., V4). The purpose of this study was to compare the perceptual asynchrony between motion and color on two psychophysical tasks. In the color correspondence task, observers indicated the predominant color of an 18°×18° field of colored dots when they moved in a specific direction. On each trial, the dots periodically changed color from red to green and moved cyclically at 15, 30, or 60 deg/s in two directions separated by 180°, 135°, 90°, or 45°. In the temporal order judgment task, observers indicated whether a change in color occurred before or after a change in motion, within a single cycle of the moving-dot stimulus. In the color correspondence task, we found that the perceptual asynchrony between color and motion depends on the difference in directions within the motion cycle, but not on the dot velocity. In the temporal order judgment task, the perceptual asynchrony is substantially shorter than in the color correspondence task, and does not depend on the change in motion direction or the dot velocity. These findings suggest that it is inappropriate to interpret previous psychophysical results as evidence that motion perception generally lags color perception. We discuss our data in the context of a “two-stage sustained-transient” functional model for the processing of various perceptual attributes.
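In a temporal order judgment task like the one described above, the perceptual asynchrony is conventionally read off as the point of subjective simultaneity (PSS): the stimulus onset asynchrony (SOA) at which the observer reports each order equally often. A minimal sketch of that analysis, not the authors' code; the SOA values and response proportions below are invented for illustration:

```python
import numpy as np

# SOA in ms: negative = color change preceded motion change
soas = np.array([-120.0, -80.0, -40.0, 0.0, 40.0, 80.0, 120.0])
# Hypothetical proportion of "color changed first" responses at each SOA
p_color_first = np.array([0.95, 0.85, 0.70, 0.55, 0.35, 0.15, 0.05])

# Simple logistic fit: regress the log-odds of the responses on SOA,
# then solve for the SOA where the predicted proportion is 0.5.
logits = np.log(p_color_first / (1.0 - p_color_first))
slope, intercept = np.polyfit(soas, logits, 1)
pss = -intercept / slope  # SOA at which log-odds = 0, i.e., p = 0.5

print(f"PSS = {pss:.1f} ms")
```

A PSS near zero corresponds to the short asynchronies the study reports for the temporal order judgment task, while larger shifts would indicate that one attribute perceptually lags the other.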
A New Conceptualization of Human Visual Sensory-Memory
Memory is an essential component of cognition, and disorders of memory have significant individual and societal costs. The Atkinson-Shiffrin "modal model" forms the foundation of our understanding of human memory. It consists of three stores: Sensory Memory (SM), whose visual component is called iconic memory; Short-Term Memory (STM; also called working memory, WM); and Long-Term Memory (LTM). Since its inception, shortcomings of all three components of the modal model have been identified. While the theories of STM and LTM underwent significant modifications to address these shortcomings, models of iconic memory have remained largely unchanged: a high-capacity but rapidly decaying store whose contents are encoded in retinotopic coordinates, i.e., according to how the stimulus is projected on the retina. The fundamental shortcoming of iconic memory models is that, because contents are encoded in retinotopic coordinates, iconic memory cannot hold any useful information under normal viewing conditions, when objects or the subject are in motion. Hence, half a century after its formulation, it remains an unresolved problem whether and how the first stage of the modal model serves any useful function, and how subsequent stages of the modal model receive inputs from the environment. Here, we propose a new conceptualization of human visual sensory memory by introducing an additional component whose reference frame consists of motion-grouping based coordinates rather than retinotopic coordinates. We review data supporting this new model and discuss how it offers solutions to the paradoxes of the traditional model of sensory memory.
TRT–Custom Typeface Design Project
TRT is a custom typeface design project commissioned by Turkey's national broadcasting channel, TRT (Türkiye Radyo Televizyon Kurumu), through the TBWA / Istanbul global ad agency. This sans-serif typeface family consists of four weights with their accompanying italics. The TRT typeface was designed by Didem Öğmen, and optimized and extended by Onur Yazıcıgil to cover all Western, Central, and Eastern Latin character sets.
Perception of rigidity in three- and four-dimensional spaces
First author draft
Motions of Parts and Wholes: An Exogenous Reference-Frame Model of Non-Retinotopic Processing
We developed a model for non-retinotopic, object-centered motion processing.