
    REBA: A Refinement-Based Architecture for Knowledge Representation and Reasoning in Robotics

    This paper describes an architecture for robots that combines the complementary strengths of probabilistic graphical models and declarative programming to represent and reason with logic-based and probabilistic descriptions of uncertainty and domain knowledge. An action language is extended to support non-boolean fluents and non-deterministic causal laws. This action language is used to describe tightly-coupled transition diagrams at two levels of granularity, with a fine-resolution transition diagram defined as a refinement of a coarse-resolution transition diagram of the domain. The coarse-resolution system description, and a history that includes (prioritized) defaults, are translated into an Answer Set Prolog (ASP) program. For any given goal, inference in the ASP program provides a plan of abstract actions. To implement each such abstract action, the robot automatically zooms to the part of the fine-resolution transition diagram relevant to this action. A probabilistic representation of the uncertainty in sensing and actuation is then included in this zoomed fine-resolution system description, and used to construct a partially observable Markov decision process (POMDP). The policy obtained by solving the POMDP is invoked repeatedly to implement the abstract action as a sequence of concrete actions, with the corresponding observations being recorded in the coarse-resolution history and used for subsequent reasoning. The architecture is evaluated in simulation and on a mobile robot moving objects in an indoor domain, to show that it supports reasoning with violations of defaults, noisy observations, and unreliable actions in complex domains.
    Comment: 72 pages, 14 figures
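The coarse-to-fine control loop described in this abstract (plan abstractly with ASP, then execute each abstract action via a POMDP policy while recording observations) can be sketched roughly as follows. This is a minimal illustrative sketch only; all function names (`plan_with_asp`, `zoom`, `solve_pomdp`) are hypothetical stand-ins, not the authors' implementation, and the stubs return canned values where a real system would invoke an ASP solver and a POMDP solver.

```python
# Illustrative sketch of the REBA-style control loop.
# All names below are hypothetical stand-ins, not the paper's actual code.

def plan_with_asp(goal, history):
    """Stand-in for ASP inference: returns a plan of abstract actions.
    A real system would translate the coarse-resolution description and
    history into an ASP program and call a solver."""
    return ["move_to(kitchen)", "pick_up(cup)"]

def zoom(abstract_action):
    """Stand-in for zooming to the fragment of the fine-resolution
    transition diagram relevant to this abstract action."""
    return {"action": abstract_action, "states": ["s0", "s1"]}

def solve_pomdp(zoomed_description):
    """Stand-in for POMDP construction and solving: yields the concrete
    actions selected by the resulting policy (two steps here)."""
    return iter(["concrete_step_1", "concrete_step_2"])

def execute(goal):
    history = []
    for abstract_action in plan_with_asp(goal, history):
        policy = solve_pomdp(zoom(abstract_action))
        for concrete_action in policy:
            observation = f"obs({concrete_action})"  # simulated sensing
            history.append(observation)  # recorded for subsequent reasoning
    return history

trace = execute("cup_in(kitchen)")
print(len(trace))  # two concrete observations per abstract action
```

The key structural point the sketch captures is the nesting: logical planning produces the outer loop of abstract actions, while probabilistic execution produces the inner loop of concrete actions, with observations flowing back into the shared history.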

    Sensory Competition in the Face Processing Areas of the Human Brain

    The concurrent presentation of multiple stimuli in the visual field may trigger mutually suppressive interactions throughout the ventral visual stream. While several studies have examined sensory competition effects among non-face stimuli, relatively little is known about the interactions in the human brain for multiple face stimuli. In the present study we analyzed the neuronal basis of sensory competition in an event-related functional magnetic resonance imaging (fMRI) study using multiple face stimuli. We varied the ratio of faces and phase-noise images within a composite display with a constant number of peripheral stimuli, thereby manipulating the competitive interactions between faces. For contralaterally presented stimuli we observed strong competition effects in the fusiform face area (FFA) bilaterally and in the right lateral occipital area (LOC), but not in the occipital face area (OFA), suggesting their different roles in sensory competition. When we increased the spatial distance among pairs of faces, the magnitude of suppressive interactions was reduced in the FFA. Surprisingly, the magnitude of competition depended on the visual hemifield of the stimuli: ipsilateral stimulation reduced the competition effects somewhat in the right LOC while it increased them in the left LOC. This suggests a left-hemifield dominance of sensory competition. Our results support the sensory competition theory in the processing of multiple faces and suggest that sensory competition occurs in several cortical areas in both cerebral hemispheres.

    Location representations of objects in cluttered scenes in the human brain

    When we perceive a visual scene, we usually see an arrangement of multiple cluttered and partly overlapping objects, like a park with trees and people in it. Spatial attention helps us to prioritize relevant portions of such scenes to efficiently interact with our environments. In previous experiments on object recognition, objects were often presented in isolation, and these studies found that the location of objects is encoded early in time (before ∼150 ms) and in early visual cortex or in the dorsal stream. However, in real life objects rarely appear in isolation but are instead embedded in cluttered scenes. Encoding the location of an object in clutter might require fundamentally different neural computations. Therefore, this dissertation addressed the question of how location representations of objects on cluttered backgrounds are encoded in the human brain. To answer this question, we investigated where in cortical space and when in neural processing time location representations emerge when objects are presented on cluttered backgrounds, and what role spatial attention plays in the encoding of object location. We addressed these questions in two studies, both including fMRI and EEG experiments. The results of the first study showed that location representations of objects on cluttered backgrounds emerge along the ventral visual stream, peaking in region LOC with a temporal delay that was linked to recurrent processing. The second study showed that spatial attention modulated those location representations in mid- and high-level regions along the ventral stream and late in time (after ∼150 ms), independently of whether backgrounds were cluttered or not. These findings show that, when objects are presented on cluttered backgrounds, location representations emerge during late stages of processing both in cortical space and in neural processing time, and that they are enhanced by spatial attention.
    Our results provide a new perspective on visual information processing in the ventral visual stream and on the temporal dynamics of location processing. Finally, we discuss how shared neural substrates of location and category representations in the brain might improve object recognition for real-world vision.

    Object Representations for Multiple Visual Categories Overlap in Lateral Occipital and Medial Fusiform Cortex

    How representations of visual objects are maintained across changes in viewpoint is a central issue in visual perception. Whether neural processes underlying view-invariant recognition involve distinct subregions within extrastriate visual cortex for distinct categories of visual objects remains unresolved. We used event-related functional magnetic resonance imaging in 16 healthy volunteers to map visual cortical areas responding to a large set (156) of exemplars from 3 object categories (faces, houses, and chairs), each repeated once after a variable time lag (3-7 intervening stimuli). Exemplars were repeated with the same viewpoint (but different retinal size) or with different viewpoint and size. The task was kept constant across object categories (judging items as "young" vs. "old"). We identified object-selective adaptation effects by comparing neural responses to the first presentation versus repetition of each individual exemplar. We found that exemplar-specific adaptation effects partly overlapped with regions showing category-selective responses (as identified using a separate localizer scan). These included the lateral fusiform gyrus (FG) for faces, parahippocampal gyrus for houses, and lateral occipital complex (LOC) for chairs. In face-selective FG, adaptation effects occurred only for faces repeated with the same viewpoint, but not with a different viewpoint, confirming previous studies using faces only. By contrast, a region in right medial FG, adjacent to but nonoverlapping with the more lateral and face-selective FG, showed repetition effects for faces and, to a lesser extent, for other objects, regardless of changes in viewpoint or in retinal image size. Category- and viewpoint-independent repetition effects were also found in bilateral LOC.
    Our results reveal a common neural substrate in bilateral LOC and right medial FG underlying view-invariant and category-independent recognition for multiple object identities, with only a relative preference for faces in medial FG but no selectivity in LO.