
    Visual Adaptation


    Visual adaptation to goal-directed hand actions

    Prolonged exposure to visual stimuli, or adaptation, often results in an adaptation “aftereffect” that can profoundly distort our perception of subsequent visual stimuli. This technique has commonly been used to investigate the mechanisms underlying our perception of simple visual stimuli and, more recently, of static faces. We tested whether humans would adapt to movies of hands grasping and placing objects of different weights. After adapting to hands grasping light or heavy objects, subsequently perceived objects appeared relatively heavier or lighter, respectively. The aftereffects increased logarithmically with the number of adaptation action repetitions and decayed logarithmically with time. Adaptation aftereffects also indicated that perception of actions relies predominantly on view-dependent mechanisms. Adapting to one action significantly influenced the perception of the opposite action. These aftereffects can only be explained by adaptation of mechanisms that take into account the presence or absence of the object in the hand. We then tested whether the evidence on action-processing mechanisms obtained using visual adaptation techniques reflects the underlying neural processing. We recorded single-cell responses to hand actions in the monkey superior temporal sulcus (STS). Cells sensitive to grasping or placing typically responded well to the opposite action; cells also responded during different phases of the actions. Cell responses were sensitive to the view of the action and depended on the presence of the object in the scene. We show here that the action-processing mechanisms established using visual adaptation parallel the neural mechanisms revealed by recording from monkey STS. Visual adaptation techniques can thus be usefully employed to investigate the brain mechanisms underlying action perception.
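The reported pattern of logarithmic growth with repetition and logarithmic decay with time can be illustrated with a minimal sketch. The functional form and the constants `k_growth` and `k_decay` below are assumptions chosen for illustration only; they are not the authors' fitted model.

```python
import math

def aftereffect(n_repetitions: int, delay_s: float,
                k_growth: float = 1.0, k_decay: float = 0.5) -> float:
    """Illustrative model (not the study's fit): aftereffect magnitude
    grows logarithmically with the number of adaptation repetitions
    and decays logarithmically with elapsed time since adaptation."""
    growth = k_growth * math.log(1 + n_repetitions)
    decay = k_decay * math.log(1 + delay_s)
    return max(0.0, growth - decay)
```

Under this toy model, more repetitions yield a larger aftereffect, and longer delays shrink it, matching the qualitative trend the abstract describes.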

    Network adaptation improves temporal representation of naturalistic stimuli in drosophila eye: II Mechanisms

    Retinal networks must adapt constantly to best present the ever-changing visual world to the brain. Here we test the hypothesis that adaptation results from different mechanisms at several synaptic connections within the network. In a companion paper (Part I), we showed that adaptation in the photoreceptors (R1-R6) and large monopolar cells (LMCs) of the Drosophila eye improves sensitivity to under-represented signals within seconds by enhancing both the amplitude and the frequency distribution of LMCs' voltage responses to repeated naturalistic contrast series. In this paper, we show that such adaptation requires both the light-mediated conductance and the feedback-mediated synaptic conductance. A faulty feedforward pathway in histamine-receptor mutant flies speeds up the LMC output, mimicking extreme light adaptation. A faulty feedback pathway from L2 LMCs to photoreceptors slows down the LMC output, mimicking dark adaptation. These results underline the importance of network adaptation for efficient coding, and as a mechanism for selectively regulating the size and speed of signals in neurons. We suggest that the concerted action of many different mechanisms and neural connections is responsible for adaptation to visual stimuli. Further, our results demonstrate the need for detailed circuit reconstructions, like that of the Drosophila lamina, to understand how networks process information.

    AdaGraph: Unifying Predictive and Continuous Domain Adaptation through Graphs

    The ability to categorize is a cornerstone of visual intelligence and a key functionality for artificial, autonomous visual machines. This problem will never be solved without algorithms able to adapt and generalize across visual domains. Within the context of domain adaptation and generalization, this paper focuses on the predictive domain adaptation scenario, namely the case where no target data are available and the system has to learn to generalize from annotated source images plus unlabeled samples with associated metadata from auxiliary domains. Our contribution is the first deep architecture that tackles predictive domain adaptation, able to leverage the information brought by the auxiliary domains through a graph. Moreover, we present a simple yet effective strategy that allows us to take advantage of the incoming target data at test time, in a continuous domain adaptation scenario. Experiments on three benchmark databases support the value of our approach. Comment: CVPR 2019 (oral)

    Self-Supervised Deep Visual Odometry with Online Adaptation

    Self-supervised VO methods have shown great success in jointly estimating camera pose and depth from videos. However, like most data-driven methods, existing VO networks suffer from a notable decrease in performance when confronted with scenes different from the training data, which makes them unsuitable for practical applications. In this paper, we propose an online meta-learning algorithm to enable VO networks to continuously adapt to new environments in a self-supervised manner. The proposed method uses a convolutional long short-term memory (convLSTM) network to aggregate rich spatio-temporal information from the past. The network is able to memorize and learn from its past experience for better estimation and fast adaptation to the current frame. To deal with the changing environment when running VO in the open world, we propose an online feature alignment method that aligns feature distributions at different times. Our VO network is thus able to seamlessly adapt to different environments. Extensive experiments on unseen outdoor scenes, virtual-to-real-world, and outdoor-to-indoor environments demonstrate that our method consistently and considerably outperforms state-of-the-art self-supervised VO baselines. Comment: Accepted by CVPR 2020 (oral)
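One common way to align feature distributions across time, sketched below, is to penalize the distance between their first- and second-order moments. This is an illustrative assumption, not the paper's exact alignment loss; the function name `alignment_loss` and the moment-matching choice are hypothetical.

```python
import numpy as np

def alignment_loss(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    """Sketch of distribution alignment (an assumption, not the paper's
    loss): penalize squared differences between the per-dimension means
    and variances of two batches of features, e.g. features extracted
    at training time versus at test time."""
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    var_a, var_b = feat_a.var(axis=0), feat_b.var(axis=0)
    return float(np.sum((mu_a - mu_b) ** 2) + np.sum((var_a - var_b) ** 2))
```

Minimizing such a loss online would pull the test-time feature distribution toward the training-time one, which is the general idea behind feature alignment for adaptation.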

    Visual Learning In The Perception Of Texture: Simple And Contingent Aftereffects Of Texture Density

    Novel results elucidating the magnitude, binocularity, and retinotopicity of aftereffects of visual texture density adaptation are reported, as is a new contingent aftereffect of texture density which suggests that the perception of visual texture density is quite malleable. Texture aftereffects contingent upon orientation, color, and temporal sequence are discussed. A fourth effect is demonstrated, in which auditory contingencies are shown to produce a different kind of visual distortion. The merits and limitations of error-correction and classical-conditioning theories of contingent adaptation are reviewed. It is argued that a third kind of theory, which emphasizes coding efficiency and informational considerations, merits close attention. It is proposed that malleability in the registration of texture information can be understood as part of the functional adaptability of perception.

    Multiparameter vision testing apparatus

    A compact vision testing apparatus is described for testing a large number of physiological characteristics of the eyes and visual system of a human subject. The head of the subject is inserted into a viewing port at one end of a light-tight housing containing various optical assemblies. Visual acuity, other refractive characteristics, and the ocular muscle balance of the subject's eyes are tested by means of a retractable phoroptor assembly carried near the viewing port and a film cassette unit carried in the rearward portion of the housing (the latter selectively providing a variety of different visual targets which are viewed through the optical system of the phoroptor assembly). The visual dark adaptation characteristics and absolute brightness threshold of the subject are tested by means of a projector assembly which selectively projects one or both of a variable-intensity fixation target and a variable-intensity adaptation test field onto a viewing screen located near the top of the housing.

    Separation of Visual and Motor Workspaces During Targeted Reaching Results in Limited Generalization of Visuomotor Adaptation

    Separating visual and proprioceptive information in terms of workspace locations during reaching movements has been shown to disturb transfer of visuomotor adaptation across the arms. Here, we investigated whether separating the visual and motor workspaces would also disturb generalization of visuomotor adaptation across movement conditions within the same arm. Subjects were divided into four experimental groups (plus three control groups). The first two groups adapted to a visual rotation under a “dissociation” condition, in which the targets for reaching were presented in the midline while the arm performed reaching movements laterally. Following that, they were tested in an “association” condition in which the visual and motor workspaces were combined in the midline or laterally. The other two groups first adapted to the rotation in one association condition (medial or lateral), then were tested in the other association condition. The latter groups demonstrated complete transfer from the training to the generalization session, whereas the former groups demonstrated substantially limited transfer. These findings suggest that when visual and motor workspaces are separated, two internal models (a vision-based one and a proprioception-based one) are formed, and that a conflict between the two disrupts the development of an overall representation that underlies adaptation to a novel visuomotor transform.
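In visuomotor rotation paradigms like the one described, cursor feedback is typically rotated relative to the actual hand displacement. The sketch below shows that standard transform; the rotation angle used in this particular study is not given in the abstract, so the 30-degree default here is purely an assumption for illustration.

```python
import math

def rotate_cursor(dx: float, dy: float, angle_deg: float = 30.0):
    """Generic visuomotor rotation (illustrative): the hand moves by
    (dx, dy), but the cursor feedback shown to the subject is that
    displacement rotated by angle_deg about the start position."""
    a = math.radians(angle_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))
```

Adaptation to such a transform means subjects gradually aim in the direction that makes the rotated cursor, not the hand, land on the target.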