    Goals and means in action observation: a computational approach

    Many of our daily activities are supported by behavioural goals that guide the selection of actions that allow us to reach those goals effectively. Goals are considered important for action observation because they allow the observer to copy the goal of an action without having to use the exact same means. The ability to use different action means becomes important when the observer and the observed actor have different bodies (robots and humans) or body dimensions (parents and children), or when their environments differ substantially (e.g. when an obstacle is present in one environment but absent in the other). A selective focus on action goals rather than action means furthermore circumvents the need to take the actor's vantage point, which is consistent with recent findings that people prefer to represent the actions of others from their own perspective. In this paper, we use a computational approach to investigate how knowledge about action goals and means is used in action observation. We hypothesise that in action observation human agents are primarily interested in identifying the goals of the observed actor's behaviour. Behavioural cues (e.g. the way an object is grasped) may help to disambiguate the actor's goal (e.g. whether a cup is grasped for drinking or for handing it over). Recent advances in cognitive neuroscience are cited in support of the model's architecture.
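
    As a minimal sketch of the kind of inference the abstract describes, the snippet below inverts an assumed cue likelihood with Bayes' rule to recover a posterior over goals; the goal labels, grasp cues, and all probabilities are illustrative assumptions, not the paper's model.

```python
# P(goal): prior belief about the actor's goal (assumed uniform here)
prior = {"drink": 0.5, "hand_over": 0.5}

# P(cue | goal): how likely each grasp type is under each goal (assumed values)
likelihood = {
    ("side_grasp", "drink"): 0.8, ("side_grasp", "hand_over"): 0.2,
    ("top_grasp", "drink"): 0.2, ("top_grasp", "hand_over"): 0.8,
}

def infer_goal(cue):
    """Posterior P(goal | cue) over the actor's possible goals."""
    unnormalized = {g: likelihood[(cue, g)] * p for g, p in prior.items()}
    total = sum(unnormalized.values())
    return {g: v / total for g, v in unnormalized.items()}

# Observing a side grasp makes 'drink' the more probable goal
print(infer_goal("side_grasp"))  # {'drink': 0.8, 'hand_over': 0.2}
```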

    Characterization theorems in random utility theory

    To explain inconsistency in choice experiments, where a subject does not always select the same alternative on repeated presentations of one particular subset of alternatives, random utility theory models the subject's evaluation of a stimulus as a random variable that is sampled at each presentation of the stimulus. The problem addressed in this entry is the characterization of random utility theory in its most general form (i.e., with an arbitrary joint distribution of the random variables) in terms of the testable restrictions it imposes on choice data. For the experimental paradigm in which choices are obtained for every subset of alternatives, this characterization problem has been solved; it is still open for the case of binary choice probabilities, where only the 2-element subsets are offered. In that case, the problem turns out to be equivalent to finding a linear description of the linear ordering polytope, the convex hull of all linear orders of the alternatives, identifying these orders with their indicator functions (0-1 vectors). It is illustrated that many necessary conditions for a random utility representation can be found with graph-theoretic techniques, but also that as the number of alternatives increases there is a combinatorial explosion of such conditions with no apparent structural regularities. A complete characterization for an arbitrary number of alternatives seems intractable at the moment. Finally, it is shown how this characterization problem for binary choice probabilities generalizes to other instances of probabilistic measurement.
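
    The necessary conditions mentioned above can be made concrete in a few lines. The sketch below (an illustration with an assumed three-alternative example, not code from the entry) induces binary choice probabilities from a distribution over linear orders, i.e. a point of the linear ordering polytope, and verifies one classic family of necessary conditions, the triangle inequalities p(a,b) + p(b,c) - p(a,c) <= 1.

```python
from itertools import permutations

alternatives = ["a", "b", "c"]

# An assumed probability distribution over the six linear orders of {a, b, c}
order_prob = {order: 1 / 6 for order in permutations(alternatives)}

def binary_choice_probabilities(order_prob):
    """p[(x, y)] = probability that x is ranked above (i.e. chosen over) y."""
    return {(x, y): sum(q for order, q in order_prob.items()
                        if order.index(x) < order.index(y))
            for x in alternatives for y in alternatives if x != y}

p = binary_choice_probabilities(order_prob)

# Triangle inequalities: necessary for any random utility representation
for x, y, z in permutations(alternatives, 3):
    assert p[(x, y)] + p[(y, z)] - p[(x, z)] <= 1 + 1e-9
print(p)
```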

    Ordinal data analysis: biorder representation and knowledge spaces

    Doctoral dissertation, defended cum laude. Promotores: E. Roskam and J. Falmagne. 229 pp.

    Visual stability across combined eye and body motion

    In order to maintain visual stability during self-motion, the brain needs to update any egocentric spatial representations of the environment. Here, we use a novel psychophysical approach to investigate how, and to what extent, the brain integrates visual, extraocular, and vestibular signals pertaining to this spatial update. Participants were oscillated sideways at a frequency of 0.63 Hz while keeping gaze fixed on a stationary light. When the motion direction changed, a reference target was shown either in front of or behind the fixation point. At the next reversal, half a cycle later, we tested updating of this reference location by asking participants to judge whether a briefly flashed probe was shown to the left or right of the memorized target. We show that updating is not only biased, but that the direction and magnitude of this bias depend on both gaze and object location, implying that a gaze-centered reference frame is involved. Using geometric modeling, we further show that the gaze-dependent errors can be caused by an underestimation of translation amplitude, by a bias of visually perceived objects toward the fovea (i.e., a foveal bias), or by a combination of both.
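
    A simplified version of such a geometric model can illustrate how a translation-underestimation gain and a foveal bias each produce gaze- and depth-dependent errors. The 2-D layout, the parameter values, and the multiplicative foveal-bias term below are illustrative assumptions, not the paper's fitted model.

```python
import math

def predicted_error(target_x, target_z, fixation_z, T,
                    gain=0.8, foveal_bias=0.1):
    """Signed error (deg) in the updated gaze-centred target direction.

    target_x, target_z : target position relative to the start point (m)
    fixation_z         : depth of the (stationary) fixation light (m)
    T                  : rightward whole-body translation (m)
    """
    def gaze_dir(t):
        # Target direction minus fixation direction, after translating by t
        return math.atan2(target_x - t, target_z) - math.atan2(-t, fixation_z)

    true_dir = gaze_dir(T)           # veridical update
    est_dir = gaze_dir(gain * T)     # update with underestimated translation
    est_dir *= (1 - foveal_bias)     # compression toward the fovea (0 deg)
    return math.degrees(est_dir - true_dir)

# Targets in front of vs. behind fixation are mislocalized in opposite
# directions, as expected in a gaze-centred reference frame
print(predicted_error(0.0, 1.0, 1.5, T=0.1))
print(predicted_error(0.0, 2.0, 1.5, T=0.1))
```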

    A procedure for the incremental construction of a knowledge space

    Knowledge spaces are structures for the efficient assessment of the knowledge state of a student in a given field of knowledge. Existing procedures for constructing a knowledge space by querying an expert assume that the domain of questions is known in advance and that it is fixed during the whole query process. The outcome of these procedures is a knowledge space on the questions in that domain. If the original domain is extended with new questions, a new knowledge space on the extended domain can be produced by expert query. Since a knowledge space for the original domain already exists in this case, the available information can be used to extend the existing space efficiently, avoiding having to apply the expert query from scratch. Existing procedures do not provide an explicit way to use such information. Although they can be adapted to this purpose, this paper presents a new query algorithm that is specifically tailored to the problem described above.
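
    For concreteness, the sketch below illustrates the structure being extended: a knowledge space is a family of states (subsets of items) closed under union, and when the domain grows, the states of the old space constrain the candidate states of the new one. The example states, the new item, and the (omitted) pruning by expert answers are illustrative assumptions, not the paper's algorithm.

```python
from itertools import combinations

def union_closure(states):
    """Close a family of states (frozensets of items) under union."""
    closed = set(states)
    changed = True
    while changed:
        changed = False
        for s, t in combinations(list(closed), 2):
            u = s | t
            if u not in closed:
                closed.add(u)
                changed = True
    return closed

# An assumed knowledge space on the original domain {a, b, c}
base = {frozenset(), frozenset("a"), frozenset("ab"), frozenset("abc")}

# Extending the domain with a new item 'd': rather than querying from
# scratch, start from the old states with and without the new item and
# let expert answers prune the candidates (pruning omitted here)
candidates = union_closure(base | {s | {"d"} for s in base})
print(sorted(candidates, key=len))
```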

    Reliability-based weighting of visual and vestibular cues in displacement estimation

    When navigating through the environment, our brain needs to infer how far we move and in which direction we are heading. In this estimation process, the brain may rely on multiple sensory modalities, including the visual and vestibular systems. Previous research has mainly focused on heading estimation, showing that sensory cues are combined by weighting them in proportion to their reliability, consistent with statistically optimal integration. But while a heading estimate can be refined continuously from the ongoing flow of information, estimating how far we have moved requires integrating sensory information across the whole displacement. In this study, we investigate whether the brain optimally combines visual and vestibular information during a displacement estimation task, even when their reliability varies from trial to trial. Participants were seated on a linear sled, immersed in a stereoscopic virtual reality environment, and subjected to passive linear motions providing visual and vestibular cues; the visual coherence was varied to change relative cue reliability, and discrepancies were introduced between the cues to test relative cue weighting. Participants performed a two-interval two-alternative forced-choice task, indicating which of two sequentially perceived displacements was larger. Our results show that humans adapt their weighting of visual and vestibular information from trial to trial in proportion to their reliability. These results provide evidence that humans optimally integrate visual and vestibular information in order to estimate their body displacement.
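
    The statistically optimal rule being tested has a compact closed form: each cue is weighted by its inverse variance, and the integrated estimate is more reliable than either cue alone. The sketch below illustrates this rule with assumed displacement estimates and noise levels; it is not the study's analysis code.

```python
def integrate(s_vis, sigma_vis, s_vest, sigma_vest):
    """Reliability-weighted estimate of displacement (m) and its variance."""
    r_vis, r_vest = 1 / sigma_vis ** 2, 1 / sigma_vest ** 2  # reliabilities
    w_vis = r_vis / (r_vis + r_vest)                         # visual weight
    estimate = w_vis * s_vis + (1 - w_vis) * s_vest
    variance = 1 / (r_vis + r_vest)  # lower than either cue alone
    return estimate, variance

# Lowering visual coherence (raising sigma_vis) shifts weight to the
# vestibular cue, moving the estimate toward the vestibular value
print(integrate(s_vis=0.30, sigma_vis=0.02, s_vest=0.26, sigma_vest=0.04))
print(integrate(s_vis=0.30, sigma_vis=0.08, s_vest=0.26, sigma_vest=0.04))
```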

    Causal inference for spatial constancy across saccades

    During saccadic eye movements, the image on our retinas is, contrary to subjective experience, highly unstable. This study examines how the brain distinguishes the image perturbations caused by saccades from those due to changes in the visual scene. We first show that participants made severe errors in judging the presaccadic location of an object that shifts during a saccade. We then show that these observations can be modeled based on causal inference principles, evaluating whether the presaccadic and postsaccadic object percepts derive from a single stable object or not. At the single-trial level, this evaluation is not 'either/or' but a probability that also determines the weight by which pre- and postsaccadic signals are separated and integrated in judging object locations across saccades.
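
    A minimal model-averaging sketch of this causal inference principle is shown below: the posterior probability of a common cause sets the weight between the integrated and segregated location estimates. The Gaussian forms and all parameter values are assumptions for illustration, not the paper's fitted model.

```python
import math

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def localize_presaccadic(x_pre, x_post, sigma_pre, sigma_post,
                         p_common=0.5, sigma_prior=5.0):
    """Presaccadic location estimate under causal inference (model averaging)."""
    # Likelihood of the trans-saccadic shift under one stable object (C = 1)...
    like_c1 = norm_pdf(x_post - x_pre, 0.0, math.hypot(sigma_pre, sigma_post))
    # ...and under two independent causes drawn from a broad spatial prior
    like_c2 = norm_pdf(x_post - x_pre, 0.0,
                       math.sqrt(sigma_pre**2 + sigma_post**2 + 2 * sigma_prior**2))
    post_c1 = (like_c1 * p_common /
               (like_c1 * p_common + like_c2 * (1 - p_common)))
    # If stable (C = 1), integrate both percepts; if not (C = 2), keep the
    # presaccadic percept. Report the probability-weighted average.
    w_pre = sigma_post**2 / (sigma_pre**2 + sigma_post**2)
    fused = w_pre * x_pre + (1 - w_pre) * x_post
    return post_c1 * fused + (1 - post_c1) * x_pre

# A large intrasaccadic shift lowers P(C = 1), pulling the estimate back
# toward the presaccadic percept
print(localize_presaccadic(x_pre=0.0, x_post=0.5, sigma_pre=1.5, sigma_post=0.5))
print(localize_presaccadic(x_pre=0.0, x_post=4.0, sigma_pre=1.5, sigma_post=0.5))
```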

    Psychophysical evaluation of sensory reweighting in bilateral vestibulopathy

    Perception of spatial orientation is thought to rely on the brain's integration of visual, vestibular, proprioceptive, and somatosensory signals, as well as internal beliefs. When one of these signals breaks down, such as the vestibular signal in bilateral vestibulopathy, patients start compensating by relying more on the remaining cues. How these signals are reweighted in this integration process is difficult to establish, since they cannot be measured in isolation during natural tasks, are inherently noisy, and can be ambiguous or in conflict. Here, we review our recent work combining experimental psychophysics with a reverse-engineering approach, based on Bayesian inference principles, to quantify sensory noise levels and optimal (re)weighting at the individual-subject level, in both patients with bilateral vestibular deficits and healthy controls. We show that these patients reweight the remaining sensory information, relying more on visual and other non-vestibular information than healthy controls in the perception of spatial orientation. This quantification approach could improve diagnostics and prognostics of multisensory integration deficits in vestibular patients, and contribute to the evaluation of rehabilitation therapies built around specific training programs.
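
    The reverse-engineering step has a simple core: once each cue's noise level has been measured psychophysically, the Bayesian-optimal weights follow from the inverse variances. The sketch below illustrates the predicted reweighting with assumed threshold values for a control and a patient; the cue labels and numbers are illustrative, not data from the paper.

```python
def optimal_weights(sigmas):
    """Map per-cue noise levels (e.g. discrimination thresholds, deg) to
    normalized inverse-variance (Bayes-optimal) weights."""
    reliabilities = {cue: 1 / s ** 2 for cue, s in sigmas.items()}
    total = sum(reliabilities.values())
    return {cue: r / total for cue, r in reliabilities.items()}

# Assumed noise levels: the patient's vestibular cue is strongly degraded
control = {"visual": 2.0, "vestibular": 2.5, "somatosensory": 3.0}
patient = {"visual": 2.0, "vestibular": 9.0, "somatosensory": 3.0}

print(optimal_weights(control))  # weight spread over all three cues
print(optimal_weights(patient))  # weight reallocated to non-vestibular cues
```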

    Causal inference in the updating and weighting of allocentric and egocentric information for spatial constancy during whole-body motion

    It has been reported that the brain combines egocentric and allocentric information to update object positions after an intervening movement. Studies typically use discrete updating tasks (i.e., comparing pre- to post-movement target representations). Such approaches, however, cannot reveal how the brain weighs the information in these reference frames during the intervening motion. A reasonable assumption is that an object whose position is stable over time is more likely to be treated as a reliable allocentric landmark. But inferring whether an object is stable in space while the observer is moving involves attributing perceived changes in location to either the object's or the observer's displacement. Here, we tested this causal inference hypothesis with a continuous whole-body motion updating task. At the beginning of a trial, a target was presented for 500 ms within a large visual frame. As soon as the target disappeared, participants were asked to move a cursor to its location by controlling a linear guide mounted on the vestibular sled on which they were seated. Participants were translated sideways as soon as their reaching movement started, and they had to keep the cursor on the remembered target location in space while being moved. During the sled motion, the frame moved with a velocity proportional to that of the sled (gain ranging from -0.7 to 0.7). Participants' responses showed a systematic bias in the direction of the frame displacement, one that increased with the difference between the frame and sled velocities for small differences but decreased for large differences. This bias pattern provides evidence that humans exploit a dynamic Bayesian inference process with two causal structures to mediate the dynamic integration of allocentric and egocentric information in spatial updating. Meeting abstract presented at VSS 2017.
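
    The reported bias pattern is qualitatively what a causal-inference weighting scheme predicts: the allocentric cue is trusted only insofar as the frame appears world-stable, so its weight falls off as the frame's motion grows. The sketch below reproduces this nonmonotonic pattern; the Gaussian stability posterior, the parameter values, and the use of the frame's world velocity as the discrepancy signal are all illustrative assumptions, not the abstract's fitted model.

```python
import math

def predicted_bias(frame_gain, sled_velocity=0.3, w_max=0.5, sigma=0.1):
    """Bias of the remembered location in the direction of frame motion."""
    frame_velocity = frame_gain * sled_velocity  # frame velocity in the world
    # Posterior belief that the frame is a stable landmark falls off with
    # its world motion (a truly stable frame has zero world velocity)
    p_stable = math.exp(-0.5 * (frame_velocity / sigma) ** 2)
    # Bias = allocentric weight times the frame displacement
    return w_max * p_stable * frame_velocity

for gain in (-0.7, -0.35, 0.0, 0.35, 0.7):
    print(gain, round(predicted_bias(gain), 4))
# The bias first grows with the frame's motion, then shrinks: nonmonotonic
```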