For primates (including humans), interacting with objects of interest in the environment often involves foveating them, and many of these objects are not static (e.g., other animals, or relative motion due to self-induced movement). Eye movements allow the active and continuous sampling of local information, exploiting the graded precision of visual signals (e.g., due to the types and distributions of photoreceptors). Foveating and tracking targets therefore requires adapting to their motion. Indeed, given the delays involved in transmitting retinal signals to the eye muscles, a purely reactive scheme cannot account for the smooth pursuit movements that maintain the target within the central visual field. Internal models have been posited to represent the future position of the target (for instance, by extrapolating from past observations) in order to compensate for these delays. Yet adaptation of sensorimotor and neural activity may be sufficient to synchronize with the movement of the target, converging to an encoding of its location here-and-now, without explicitly resorting to any frame of reference (Goffart et al., 2017).

Committing to a distributed dynamical systems approach, we relied on a computational implementation of neural fields to model an adaptation mechanism sufficient to select, focus on, and track rapidly moving targets. By coupling the generation of eye movements with dynamic neural field models and a simple learning rule, we replicated neurophysiological results showing how the monkey adapts to repeatedly observed moving targets (Bourrelly et al., 2016; Quinton & Goffart, 2018), progressively reducing the number of catch-up saccades and increasing smooth pursuit velocity (while never going beyond the here-and-now target location). We now focus on the eye movements observed in the presence of two simultaneously moving centrifugal targets (Goffart, 2016), for which the reduction to a single trajectory with some predicted dynamics (e.g., the target center) is even more inappropriate.
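The abstract does not spell out the field equations; as an illustrative sketch only, the following Python snippet simulates the standard Amari formulation of a one-dimensional dynamic neural field driven by a Gaussian input moving at constant velocity. All parameter values, the difference-of-Gaussians kernel, the sigmoidal activation, and the center-of-mass decoding are assumptions chosen for readability; the actual model of Quinton & Goffart (2018), its coupling with saccade and pursuit generation, and its learning rule are not reproduced here.

```python
import numpy as np

# Hypothetical parameters; the published model differs in its coupling
# with eye-movement generation and in its adaptation/learning rule.
N = 100                        # number of field positions
x = np.linspace(-1.0, 1.0, N)  # spatial sampling of the field
dx = x[1] - x[0]
tau, h = 0.1, -0.5             # time constant and resting level
dt, T = 0.01, 2.0              # integration step and total duration

# Difference-of-Gaussians lateral kernel: local excitation, broader inhibition,
# which lets the field form a single self-sustained peak (target selection).
def kernel(d, a_e=2.0, s_e=0.1, a_i=1.0, s_i=0.4):
    return a_e * np.exp(-d**2 / (2 * s_e**2)) - a_i * np.exp(-d**2 / (2 * s_i**2))

W = kernel(x[:, None] - x[None, :])
f = lambda u: 1.0 / (1.0 + np.exp(-10.0 * u))   # sigmoidal firing rate

u = np.full(N, h)              # field potential, initially at rest
for step in range(int(T / dt)):
    t = step * dt
    target = -0.8 + 0.8 * t    # target moving at constant velocity
    I = 3.0 * np.exp(-(x - target)**2 / (2 * 0.05**2))  # Gaussian input
    # Amari dynamics: tau * du/dt = -u + h + integral of W * f(u) + I
    u += dt / tau * (-u + h + (W @ f(u)) * dx + I)

# Decoded position: center of mass of suprathreshold activity. With fixed
# parameters the peak lags the moving input, illustrating the delay problem
# that the adaptation mechanism described above is meant to resolve.
act = f(u)
estimate = np.sum(x * act) / np.sum(act)
print(f"final target {target:+.2f}, field estimate {estimate:+.2f}")
```

Under these assumptions, the residual lag between the decoded peak and the moving input is the quantity that a simple learning rule (e.g., acting on the input gain or the kernel asymmetry) could progressively reduce, mirroring the gradual synchronization with the target described in the abstract.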