313 research outputs found

    Spatial representations of object locations and environment shape

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Brain and Cognitive Sciences, 1999. Includes bibliographical references (leaves 73-75). By Ranxiao Wang, Ph.D.

    Perception and action without 3D coordinate frames

    Neuroscientists commonly assume that the brain generates representations of a scene in various non-retinotopic 3D coordinate frames, for example in 'egocentric' and 'allocentric' frames. Although neurons in early visual cortex might be described as representing a scene in an eye-centred frame, using 2 dimensions of visual direction and one of binocular disparity, there is no convincing evidence of similarly organized cortical areas using non-retinotopic 3D coordinate frames nor of any systematic transfer of information from one frame to another. We propose that perception and action in a 3D world could be achieved without generating ego- or allocentric 3D coordinate frames. Instead, we suggest that the fundamental operation the brain carries out is to compare a long state vector with a matrix of weights (essentially, a long look-up table) to choose an output (often, but not necessarily, a motor output). The processes involved in perception of a 3D scene and action within it depend, we suggest, on successive iterations of this basic operation. Advantages of this proposal include the fact that it relies on computationally well-defined operations corresponding to well-established neural processes. Also, we argue that from a philosophical perspective it is at least as plausible as theories postulating 3D coordinate frames. Finally, we suggest a variety of experiments that would falsify our claim.
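The look-up operation proposed above can be made concrete. The abstract does not specify a comparison rule, so the following minimal sketch (all names and values illustrative) reads it as a dot product between the state vector and each row of the weight matrix, followed by a winner-take-all choice:

```python
import numpy as np

def choose_output(state, weights):
    """Compare a long state vector against a weight matrix (a look-up
    table with one row per candidate output) and return the index of
    the best-matching row."""
    scores = weights @ state        # one match score per candidate output
    return int(np.argmax(scores))   # winner-take-all choice

# Toy example: three candidate outputs over a three-element state.
weights = np.eye(3)                 # each row "prefers" one state component
state = np.array([0.1, 0.9, 0.2])
print(choose_output(state, weights))  # → 1 (the second row matches best)
```

Iterating this operation, with each chosen output folded back into the next state vector, would correspond to the "successive iterations" the authors describe.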

    How Pantomime Works: Implications for Theories of Language Origin

    Pantomime refers to iconic gesturing that is done for communicative purposes in the absence of speech. Gestural theories of the origins of language claim that a stage of pantomime preceded speech as an initial form of referential communication. However, gestural theories conceive of pantomime as a unitary process, and do not distinguish among the various means by which it can be produced. We attempt here to develop a scheme for classifying pantomime based on a proposal of two new sub-categories of pantomime, resulting in a final scheme comprised of five categories of iconic gesturing. We employ the scheme to establish associations between the category of pantomime used and the type of action and/or object being depicted. Based on these associations, we argue that there are two basic modes of pantomiming and that these apply to distinct semantic categories of referents. These modes of pantomiming lead to two alternative models for a gestural origin of language, one based on people and one based on the environment.

    Time and Phonology: Precedence-Based Representations

    A major factor hindering the establishment of a successful neuroscience of phonology centers around the biological viability of a given phonological framework. The ultimate aim of this project is to find potential alignments between linguistics and neuroscience. In this vein, the main topic of the thesis rests upon establishing the minimal complexity requirements for a phonological representation that is biologically plausible, cognitively sound, and empirically motivated. Heeding Minimalist proposals (Chomsky, 1995) that encourage efficiency in computation and economy in representation, I embark on an in-depth exploration of the parameters of cognition that are necessary and sufficient in a phonological representation while discounting the processes and parameters that can be said to be "domain-general". To that end, I take seriously Ernst Pöppel's (2004) exhortation to consider the role of temporal events like linear order and precedence in the study of cognitive systems like phonology by surveying the literature on time perception. The conclusions support a separation of order from phonological representations, extending the scope of substance-freeness (Hale and Reiss, 2000) by characterizing order as substance. Such an approach can contribute to thoroughly defining the object of study and offer insight that narrows the search space for potential bridges.

    Effect of Terminal Haptic Feedback on the Sensorimotor Control of Visually and Tactile-Guided Grasping

    When grasping a physical object, the sensorimotor system is able to specify grip aperture via absolute sensory information. In contrast, grasping to a location previously occupied by (no-target pantomime-grasp) or adjacent to (spatially dissociated pantomime-grasp) an object results in the specification of grip aperture via relative sensory information. It is important to recognize that grasping a physical object and pantomime-grasping differ not only in terms of their spatial properties but also with respect to the availability of haptic feedback. Thus, the objective of this dissertation was to investigate how terminal haptic feedback influences the underlying mechanisms that support goal-directed grasping in visual- and tactile-based settings. In Chapter Two I sought to determine whether absolute haptic feedback influences tactile-based cues supporting grasps performed to the location previously occupied by an object. Results demonstrated that when haptic feedback was presented at the end of the response absolute haptic signals were incorporated in grasp production. Such a finding indicates that haptic feedback supports the absolute calibration between a tactile defined object and the required motor output. In Chapter Three I examined whether haptic feedback influences the information supporting visually guided no-target pantomime-grasps in a manner similar to tactile-guided grasping. Results showed that haptic sensory signals support no-target pantomime-grasping when provided at the end of the response. Accordingly, my findings demonstrated that a visuo-haptic calibration supports the absolute specification of object size and highlights the role of multisensory integration in no-target pantomime-grasping. Importantly, however, Chapter Four demonstrated that a priori knowledge of haptic feedback is necessary to support the aforementioned calibration process. 
In Chapter Five I demonstrate that, unlike no-target pantomime-grasps, spatially dissociated pantomime-grasps precluded a visuo-haptic calibration. Accordingly, I propose that the top-down demands of decoupling stimulus-response relations in spatially dissociated pantomime-grasping render aperture shaping via a visual percept that is immutable to the integration of haptic feedback. In turn, the decreased top-down demands of no-target pantomime-grasps allow haptic feedback to serve as a reliable sensory resource supporting an absolute visuo-haptic calibration.

    Misperception of rigidity from actively generated optic flow

    It is conventionally assumed that the goal of the visual system is to derive a perceptual representation that is a veridical reconstruction of the external world: a reconstruction that leads to optimal accuracy and precision of metric estimates, given sensory information. For example, 3-D structure is thought to be veridically recovered from optic flow signals in combination with egocentric motion information and assumptions of the stationarity and rigidity of the external world. This theory predicts veridical perceptual judgments under conditions that mimic natural viewing, while ascribing nonoptimality under laboratory conditions to unreliable or insufficient sensory information (for example, the lack of natural and measurable observer motion). In two experiments, we contrasted this optimal theory with a heuristic theory that predicts the derivation of perceived 3-D structure based on the velocity gradients of the retinal flow field without the use of egomotion signals or a rigidity prior. Observers viewed optic flow patterns generated by their own motions relative to two surfaces and later viewed the same patterns while stationary. When the surfaces were part of a rigid structure, static observers systematically perceived a nonrigid structure, consistent with the predictions of both an optimal and a heuristic model. Contrary to the optimal model, moving observers also perceived nonrigid structures in situations where retinal and extraretinal signals, combined with a rigidity assumption, should have yielded a veridical rigid estimate. The perceptual biases were, however, consistent with a heuristic model which is only based on an analysis of the optic flow.
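The contrast between the two models can be illustrated with a minimal numerical sketch (the flow field and its values are assumed for illustration, not taken from the experiments): the heuristic model reads perceived 3-D structure directly from the spatial gradient of retinal velocity, with no egomotion correction.

```python
import numpy as np

# Retinal velocity that varies linearly with retinal position, as for a
# slanted surface viewed by a translating observer (illustrative values).
x = np.linspace(-1.0, 1.0, 201)   # retinal position (arbitrary units)
v = 0.3 * x                       # horizontal retinal velocity at each position

grad = np.gradient(v, x)          # velocity gradient of the retinal flow field

# The heuristic model bases perceived 3-D structure on `grad` alone; the
# optimal model would additionally combine the flow with egomotion signals
# and a rigidity prior before committing to an interpretation.
print(round(float(grad.mean()), 3))  # → 0.3
```

Because a stationary and a moving observer can receive the same retinal gradient, a purely gradient-based heuristic predicts the same percept in both cases, which is the signature the experiments test.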

    Cognitive Principles of Schematisation for Wayfinding Assistance

    People often need assistance to successfully perform wayfinding tasks in unfamiliar environments. Nowadays, a huge variety of wayfinding assistance systems exists. All these systems intend to present the information needed in a given wayfinding situation in an adequate form. Some wayfinding assistance systems draw on findings from the cognitive sciences to develop cognitively ergonomic approaches. These approaches aim to produce systems with which users can interact effortlessly and which present the needed information in a way the user can acquire naturally. It is therefore necessary to determine the information needs of the user in a given wayfinding task and to investigate how this information is processed and conceptualised by the wayfinder, so that it can be presented adequately. Cognitively motivated schematic maps are an example that employs this knowledge: they emphasise relevant information and present it in an easily readable way. In my thesis I present a transfer approach that reuses well-grounded knowledge of schematisation techniques from one externalisation, such as maps, in another externalisation, such as virtual environments. An analysis of the informational needs of the specific wayfinding task of route following is carried out by means of a functional decomposition, together with an in-depth representation-theoretic analysis of the external representations maps and virtual environments. From these results, guidelines for transferring schematisation principles between different representation types are proposed. Specifically, this thesis uses the exemplary transfer of the schematisation technique of wayfinding choremes from a map presentation into a virtual environment to present the theoretical requirements for a successful transfer. Wayfinding choremes are abstract mental concepts of turning actions which are accessible as graphical externalisations integrated into route maps.
These wayfinding choreme maps emphasise the turning actions along the route by displaying the angular information as prototypes of 45° or 90°. This schematisation technique enhances wayfinding performance by supporting the matching process between the map representation and the internal mental representation of the user. I embed the concept of wayfinding choremes into a virtual environment and present a study testing whether the transferred schematisation technique also enhances wayfinding performance. The empirical investigations demonstrate a successful transfer of the concept of wayfinding choremes. Depending on the complexity of the route, the embedded schematisation enhances the wayfinding performance of participants who try to follow a route from memory: participants who trained and recalled the route in a schematised virtual environment made fewer errors than participants in the unmodified virtual world. This thesis exemplifies the full research cycle from cognitive behavioural studies, through representation-theoretic considerations, to wayfinding assistance applications and their evaluation, and back to new conclusions in cognitive science. It contributes an interdisciplinary, comprehensive examination of the interplay of environmental factors and mental processes, using the example of angular information and the mental distortion of this information.
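The prototype step described above can be sketched as snapping each measured turn angle to the nearest choreme prototype. The function name and the full set of 45° multiples are illustrative assumptions; the thesis itself names the 45° and 90° prototypes.

```python
def schematise_angle(turn_deg, prototypes=(45, 90, 135, 180, 225, 270, 315)):
    """Snap a measured turn angle (degrees) to the nearest prototype,
    measuring distance around the circle."""
    def circular_distance(a, b):
        # Shortest angular distance between a and b, in [0, 180].
        return abs((a - b + 180) % 360 - 180)
    return min(prototypes, key=lambda p: circular_distance(turn_deg, p))

print(schematise_angle(52))   # → 45 (a shallow turn is regularised)
print(schematise_angle(100))  # → 90 (a near-right turn becomes a right angle)
```

Replacing measured angles with prototypes in this way is what lets the external representation match the user's already-regularised mental representation of the turns.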