    Recovering pyramid WS gain in non-common path aberration correction mode via deformable lens

    It is by now well known that pyramid-based wavefront sensors, once in closed loop, progressively increase their gain as the reference natural-star image shrinks on the pyramid pin. Especially in extreme adaptive optics applications, the non-common path aberrations between the scientific and sensing channels are commonly corrected by injecting an offset wavefront deformation into the DM(s), which at the same time drives the pyramid away from its optimal working condition. In this paper we elaborate on the possibility of correcting the low-order non-common path aberrations at the pyramid wavefront sensor level by means of an adaptive refractive lens placed in the optical path before the pyramid itself, thereby mitigating the gain loss.
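
    As a rough numerical illustration of the splitting idea (a sketch under assumptions, not the authors' implementation; the function name, array shapes and the stand-in modal basis are all hypothetical), the measured NCPA phase can be projected onto a modal basis, with the low orders routed to the adaptive lens in front of the pyramid and only the residual kept as a DM offset:

        import numpy as np

        def split_ncpa_correction(ncpa_phase, modal_basis, n_low_order=10):
            """Project the NCPA phase onto a modal basis and split the
            correction: low orders go to the adaptive lens in front of
            the pyramid, the residual stays as a DM offset.

            ncpa_phase  : (n_pix,) flattened NCPA phase map [rad]
            modal_basis : (n_pix, n_modes) orthonormal modes (e.g. Zernikes)
            """
            coeffs = modal_basis.T @ ncpa_phase          # modal coefficients
            lens_cmd = coeffs[:n_low_order]              # low orders -> lens
            dm_offset = ncpa_phase - modal_basis[:, :n_low_order] @ lens_cmd
            return lens_cmd, dm_offset

        # Toy usage with a random orthonormal basis standing in for Zernikes.
        rng = np.random.default_rng(0)
        basis, _ = np.linalg.qr(rng.standard_normal((1024, 50)))
        ncpa = 0.1 * (basis @ rng.standard_normal(50))   # synthetic NCPA
        lens_cmd, dm_offset = split_ncpa_correction(ncpa, basis)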

    Learning the visual–oculomotor transformation: effects on saccade control and space representation

    Active eye movements can be exploited to build a visuomotor representation of the surrounding environment. Maintaining and improving such a representation requires updating the internal model involved in the generation of eye movements. From this perspective, action and perception are tightly coupled and interdependent. In this work, we encoded the internal model for oculomotor control with an adaptive filter inspired by the functionality of the cerebellum. Recurrent loops between a feedback controller and the internal model allow our system to perform accurate binocular saccades and to create an implicit representation of the nearby space. Simulation results show that this recurrent architecture outperforms classical feedback-error learning in terms of both accuracy and robustness to system parameters. The proposed approach was validated by implementing the framework on an anthropomorphic robotic head.
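
    A generic scalar sketch of coupling a feedback controller with a learned internal model in a loop (a simplification with assumed gains and a one-dimensional plant, not the paper's architecture): the model proposes a feedforward command, the feedback controller corrects it online, and the executed command/outcome pair then trains the model:

        import numpy as np

        plant_gain = 2.0        # unknown oculomotor plant
        fb_gain = 0.5           # feedback controller gain
        w, lr = 0.0, 0.2        # linear internal model: u_ff = w * target

        rng = np.random.default_rng(0)
        for trial in range(400):
            target = rng.uniform(-1, 1)
            u = w * target                  # feedforward command from the model
            eye = plant_gain * u            # initial, open-loop movement
            for _ in range(5):              # closed-loop corrective steps
                u += fb_gain * (target - eye)
                eye = plant_gain * u
            # Direct inverse learning: the outcome `eye` was produced by the
            # total command `u`, so (eye, u) is a valid training pair.
            w += lr * (u - w * eye) * eye

        print("learned inverse gain:", w, "ideal:", 1.0 / plant_gain)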

    Adaptive saccade controller inspired by the primates' cerebellum

    Saccades are fast eye movements that allow humans and robots to bring a visual target to the center of the visual field. Saccades are open loop with respect to the vision system, so their execution requires precise knowledge of the internal model of the oculomotor system. In this work, we modeled saccade control taking inspiration from the recurrent loops between the cerebellum and the brainstem. In this model, the brainstem acts as a fixed inverse model of the oculomotor system, while the cerebellum acts as an adaptive element that learns the internal model of the oculomotor system. The adaptive filter is implemented using a state-of-the-art learning algorithm, I-SSGPR. The proposed approach, namely the recurrent architecture, was validated through experiments performed both in simulation and on an anthropomorphic robotic head. Moreover, we compared the recurrent architecture with another model of the cerebellum, feedback error learning. The results show that the recurrent architecture outperforms feedback error learning in terms of accuracy and insensitivity to the choice of the feedback controller.
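
    A minimal scalar sketch of the recurrent architecture (assumed gains; the paper's adaptive element is I-SSGPR rather than this single-weight filter): the brainstem supplies a fixed, inaccurate inverse model, while the cerebellar element reads an efference copy of the motor command, adjusts the brainstem's input, and learns from the post-saccadic visual error:

        import numpy as np

        plant_gain = 2.0  # true oculomotor plant (unknown to the controller)
        brainstem = 0.4   # fixed inverse model, wrongly assuming a gain of 2.5
        w, lr = 0.0, 0.3  # cerebellar weight on the efference copy

        rng = np.random.default_rng(1)
        for trial in range(500):
            target = rng.uniform(-1, 1)
            # Recurrent loop at steady state: u = brainstem * (target + w * u)
            u = brainstem * target / (1.0 - brainstem * w)
            eye = plant_gain * u
            error = target - eye    # post-saccadic error trains the cerebellum
            w += lr * error * u

        # A perfect correction gives plant_gain * brainstem / (1 - brainstem*w) = 1.
        print("residual gain error:",
              abs(plant_gain * brainstem / (1.0 - brainstem * w) - 1.0))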

    Decoding information for grasping from the macaque dorsomedial visual stream

    Neurodecoders have been developed by researchers mostly to control neuroprosthetic devices, but also to shed new light on neural functions. In this study, we show that signals representing grip configurations can be reliably decoded from neural data acquired from area V6A of the monkey medial posterior parietal cortex. Two Macaca fascicularis monkeys were trained to perform an instructed-delay reach-to-grasp task, in the dark and in the light, toward objects of different shapes. Population neural activity was extracted at various time intervals covering object vision, the delay before movement, and grasp execution. This activity was used to train and validate a Bayes classifier for decoding objects and grip types. Recognition rates were well above chance level for all the epochs analyzed in this study. Furthermore, we detected slightly different decoding accuracies depending on the visual condition of the task. A generalization analysis was performed by training and testing the system on different time intervals. This analysis demonstrated that a change of code occurred during the course of the task. Our classifier was able to discriminate grasp types well in advance of grasp onset. This feature may be important when signals must reach external devices before the movement starts. Our results suggest that neural signals from the dorsomedial visual pathway can be a good substrate for feeding neural prostheses for prehensile actions.
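
    A toy sketch of this kind of decoding pipeline on synthetic data (a Gaussian naive Bayes stand-in for the paper's Bayes classifier; the unit counts, trial counts and epoch structure are invented for illustration), including a cross-epoch test in the spirit of the generalization analysis:

        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        rng = np.random.default_rng(0)
        n_trials, n_units, n_grips = 200, 50, 5
        grip = rng.integers(0, n_grips, n_trials)    # grip label per trial
        # Epoch-specific population codes: tuning differs between epochs.
        tuning_vis = rng.standard_normal((n_grips, n_units))
        tuning_mov = rng.standard_normal((n_grips, n_units))
        rates_vis = tuning_vis[grip] + rng.standard_normal((n_trials, n_units))
        rates_mov = tuning_mov[grip] + rng.standard_normal((n_trials, n_units))

        clf = GaussianNB().fit(rates_vis[:150], grip[:150])
        print("within-epoch accuracy:", clf.score(rates_vis[150:], grip[150:]))
        # Generalization analysis: train on the visual epoch, test on movement.
        # Near-chance accuracy here reflects a change of code across epochs.
        print("cross-epoch accuracy :", clf.score(rates_mov[150:], grip[150:]))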

    Unsupervised grounding of textual descriptions of object features and actions in video

    We propose a novel method for learning visual concepts and their correspondence to the words of a natural language. The concepts and correspondences are jointly inferred from video clips depicting simple actions involving multiple objects, together with corresponding natural language commands that would elicit these actions. Individual objects are first detected, together with quantitative measurements of their colour, shape, location and motion. Visual concepts emerge from the co-occurrence of regions within a measurement space and words of the language. The method is evaluated on a set of videos generated automatically using computer graphics from a database of initial and goal configurations of objects. Each video is annotated with multiple commands in natural language obtained from human annotators via crowdsourcing.
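
    A toy sketch of grounding by co-occurrence (an assumed formulation, not the paper's model; the concept names, vocabulary and counts are invented): quantised regions of the measurement space are linked to the words they co-occur with far more often than chance, here scored with pointwise mutual information:

        import numpy as np

        concepts = ["red-region", "green-region", "round-shape", "moving-up"]
        vocab = ["red", "green", "ball", "lift"]
        # counts[i, j] = number of clips where concept i and word j co-occur
        counts = np.array([[30,  2,  5,  4],
                           [ 3, 28,  4,  5],
                           [ 6,  5, 25,  6],
                           [ 4,  4,  6, 27]], dtype=float)

        joint = counts / counts.sum()
        pmi = np.log(joint / np.outer(joint.sum(axis=1), joint.sum(axis=0)))
        for j, word in enumerate(vocab):
            print(word, "->", concepts[int(np.argmax(pmi[:, j]))])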

    Joint Tracking and Event Analysis for Carried Object Detection

    This paper proposes a novel method for jointly estimating the track of a moving object and the events in which it participates. The method is intended for generic objects that are hard to localise and track given the performance of current detection algorithms; our focus is on events involving carried objects. The tracks of the other objects with which the target object interacts (e.g. the carrying person) are assumed to be given. The method is posed as maximisation of a posterior probability defined over event sequences and temporally disjoint subsets of the tracklets produced by an earlier tracking process. The probability function is a Hidden Markov Model coupled with a term that penalises non-smooth tracks and large gaps in the observed data. We evaluate the method using tracklets output by three state-of-the-art trackers on the newly created MINDSEYE2015 dataset and demonstrate improved performance.
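
    A sketch of the kind of objective described (an assumed form with hypothetical names, shapes and weights, not the paper's exact posterior): an HMM score over the event sequence, minus penalties for non-smooth tracks and for temporal gaps between the selected tracklets:

        import numpy as np

        def score(events, tracklets, log_trans, log_emit,
                  smooth_wt=1.0, gap_wt=0.5):
            """events    : list of event-state indices, one per frame
               tracklets : list of (start_frame, end_frame, positions) tuples
               log_trans : (n_states, n_states) HMM log transition matrix
               log_emit  : (n_frames, n_states) per-frame log emission scores"""
            s = sum(log_emit[t, e] for t, e in enumerate(events))
            s += sum(log_trans[a, b] for a, b in zip(events, events[1:]))
            for _, _, pos in tracklets:              # non-smoothness penalty
                s -= smooth_wt * np.sum(np.diff(pos, n=2, axis=0) ** 2)
            spans = sorted((t0, t1) for t0, t1, _ in tracklets)
            s -= gap_wt * sum(max(0, b0 - a1 - 1)    # gap penalty
                              for (_, a1), (b0, _) in zip(spans, spans[1:]))
            return s

        # Toy usage: 2 event states over 4 frames and one short tracklet.
        lt = np.log(np.array([[0.9, 0.1], [0.2, 0.8]]))
        le = np.log(np.full((4, 2), 0.5))
        trk = [(0, 3, np.array([[0., 0.], [1., 0.], [2., 0.], [3.1, 0.]]))]
        print(score([0, 0, 1, 1], trk, lt, le))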