
    Visual Learning in Multiple-Object Tracking

    Tracking moving objects in space is important for maintaining spatiotemporal continuity in everyday visual tasks. In the laboratory, this ability is tested with the Multiple Object Tracking (MOT) task, in which participants attentively track a subset of moving objects over an extended period of time. The ability to track multiple objects with attention is severely limited. Recent research has shown that this ability may improve with extensive practice (e.g., from action videogame playing), but whether tracking also improves within a short training session with repeated trajectories has rarely been investigated. In this study we examine the role of visual learning in multiple-object tracking and characterize how varieties of attention interact with visual learning.

    Participants first performed attentive tracking on trials with repeated motion trajectories in a short session. In a transfer phase we used the same motion trajectories but changed the roles of tracking targets and nontargets. We found that, compared with novel trials, tracking was enhanced only when the target subset was the same as that used during training. Learning did not transfer when the previously trained targets and nontargets switched roles or were mixed. However, learning was not specific to the trained temporal order, as it transferred to trials in which the motion was played backwards.

    These findings suggest that the demanding task of tracking multiple objects can benefit from learning of repeated motion trajectories. Such learning potentially facilitates tracking in natural vision, although it is largely confined to the trajectories of attended objects. Furthermore, we showed that learning in attentive tracking relies on relational coding of all target trajectories. Surprisingly, learning was not specific to the trained temporal context, probably because observers learned the motion path of each trajectory independently of the exact temporal order.

    A New Technique for Studying Implicit Relational Learning in Adult Humans: Multiple-Object Tracking Task

    Advisor: Olga Lazareva

    Adult humans readily learn to respond to relations, but it is normally assumed that their ability to verbalize relations plays a critical role. To study relational learning in the absence of verbalization, we developed a new technique using a multiple-object tracking task. In this task, participants are told to track four out of eight objects cued at the beginning of the trial. At the end of the trial, a single object is cued, and participants report whether they tracked it (yes/no task). The display contained two strips of different widths, but participants were not informed about their presence. Participants were randomly assigned to an Informative or a Random condition. In the Informative condition, the location of the object cued at the end of the trial predicted the correct response: if the answer was "yes", the cued object was located next to the narrower strip; otherwise, it was located next to the wider strip (or vice versa). In the Random condition, the cued object was located next to either strip, so its location was not predictive of the correct answer. A postexperimental questionnaire showed that participants in the Informative condition were not aware of the predictive role of object location; nonetheless, they were more accurate than participants in the Random condition, providing evidence of implicit relational learning in this new experimental paradigm. Our results suggest that the ability to verbalize relations may not be essential for demonstrating relational learning.

    Drake University, Department of Psychology

    From images via symbols to contexts: using augmented reality for interactive model acquisition

    Get PDF
    Systems that perform in real environments need to bind their internal state to externally perceived objects, events, or complete scenes. How to learn this correspondence has been a long-standing problem in computer vision as well as artificial intelligence. Augmented Reality provides an interesting perspective on this problem because a human user can directly relate displayed system results to real environments. In the following we present a system that is able to bootstrap internal models from user-system interactions. Starting from pictorial representations, it learns symbolic object labels that provide the basis for storing observed episodes. In a second step, more complex relational information is extracted from the stored episodes, enabling the system to react to specific scene contexts.

    Stochastic Prediction of Multi-Agent Interactions from Partial Observations

    We present a method that learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents. Our method is based on a graph-structured variational recurrent neural network (Graph-VRNN), which is trained end-to-end to infer the current state of the (partially observed) world, as well as to forecast future states. We show that our method outperforms various baselines on two sports datasets, one based on real basketball trajectories, and one generated by a soccer game engine.

    Comment: ICLR 2019 camera ready
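    The core idea of a graph-structured recurrent model over interacting agents can be illustrated with a minimal sketch (not the authors' code): each agent is a node that aggregates messages from the other agents before updating its own hidden state. All dimensions, weight names, and the simple tanh update below are illustrative assumptions, not the Graph-VRNN architecture itself.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, H = 4, 8  # number of agents (graph nodes), hidden-state size

    def graph_step(h, W_msg, W_upd):
        """One message-passing + recurrent update over a fully connected agent graph.

        h: (N, H) per-agent hidden states.
        Returns the updated (N, H) hidden states.
        """
        # Each node emits a transformed message from its current state.
        msgs = np.tanh(h @ W_msg)                                  # (N, H)
        # Each node receives the mean of all OTHER nodes' messages.
        agg = (msgs.sum(axis=0, keepdims=True) - msgs) / (N - 1)   # (N, H)
        # Simple recurrent update mixing own state and aggregated messages.
        return np.tanh(np.concatenate([h, agg], axis=1) @ W_upd)   # (N, H)

    h = rng.standard_normal((N, H))
    W_msg = rng.standard_normal((H, H)) * 0.1
    W_upd = rng.standard_normal((2 * H, H)) * 0.1

    # Unrolling graph_step over time, with a per-step latent variable and a
    # learned decoder, is where the "variational recurrent" part would come in.
    h_next = graph_step(h, W_msg, W_upd)
    print(h_next.shape)  # (4, 8)
    ```

    In the full model this per-step graph update would be wrapped in a VRNN: a latent variable is inferred at each step from the (partially observed) visual input, and the recurrent state is used both for filtering the current world state and for forecasting future ones.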