REBA: A Refinement-Based Architecture for Knowledge Representation and Reasoning in Robotics
This paper describes an architecture for robots that combines the
complementary strengths of probabilistic graphical models and declarative
programming to represent and reason with logic-based and probabilistic
descriptions of uncertainty and domain knowledge. An action language is
extended to support non-boolean fluents and non-deterministic causal laws. This
action language is used to describe tightly-coupled transition diagrams at two
levels of granularity, with a fine-resolution transition diagram defined as a
refinement of a coarse-resolution transition diagram of the domain. The
coarse-resolution system description, and a history that includes (prioritized)
defaults, are translated into an Answer Set Prolog (ASP) program. For any given
goal, inference in the ASP program provides a plan of abstract actions. To
implement each such abstract action, the robot automatically zooms to the part
of the fine-resolution transition diagram relevant to this action. A
probabilistic representation of the uncertainty in sensing and actuation is
then included in this zoomed fine-resolution system description, and used to
construct a partially observable Markov decision process (POMDP). The policy
obtained by solving the POMDP is invoked repeatedly to implement the abstract
action as a sequence of concrete actions, with the corresponding observations
being recorded in the coarse-resolution history and used for subsequent
reasoning. The architecture is evaluated in simulation and on a mobile robot
moving objects in an indoor domain, to show that it supports reasoning with
violation of defaults, noisy observations and unreliable actions, in complex
domains.
Comment: 72 pages, 14 figures
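The control loop this abstract describes — a coarse ASP plan whose abstract actions are each implemented as a sequence of noisy concrete actions under a belief update — can be sketched as a toy simulation. Everything below is illustrative: the plan, the probabilities, and the threshold-based stopping rule are hypothetical stand-ins, not the paper's actual models or API.

```python
import random

random.seed(0)

# Hypothetical coarse-resolution plan; in the described architecture this
# would come from ASP inference over the coarse system description.
plan = ["move_to(table)", "grasp(cup)"]

P_SUCCESS = 0.8  # assumed per-attempt actuation reliability (illustrative)
P_OBS = 0.9      # assumed sensor reliability (illustrative)

def belief_step(belief, obs):
    """One predict-then-correct update of the belief that the abstract
    action's goal fluent holds, given a noisy boolean observation."""
    b = belief + (1.0 - belief) * P_SUCCESS             # transition model
    like_t = P_OBS if obs else 1.0 - P_OBS              # P(obs | done)
    like_f = (1.0 - P_OBS) if obs else P_OBS            # P(obs | not done)
    return like_t * b / (like_t * b + like_f * (1.0 - b))

def execute_abstract(action):
    """Implement one abstract action as repeated noisy concrete attempts,
    stopping once belief in success is high (a crude stand-in for
    repeatedly invoking a POMDP policy)."""
    done, belief, steps = False, 0.0, 0
    while belief < 0.95 and steps < 50:
        steps += 1
        if not done and random.random() < P_SUCCESS:    # noisy actuation
            done = True
        obs = done if random.random() < P_OBS else not done  # noisy sensing
        belief = belief_step(belief, obs)
    return belief, steps

# Execute the coarse plan; in the full architecture the resulting
# observations would be recorded in the coarse-resolution history.
results = [execute_abstract(a) for a in plan]
for action, (belief, steps) in zip(plan, results):
    print(f"{action}: belief={belief:.3f} after {steps} concrete action(s)")
```

The point of the sketch is the two-timescale structure: symbolic planning picks the abstract actions, while probabilistic execution decides how many concrete attempts each one takes.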
The processing of color preference in the brain
Decades of research have established that humans have preferences for some colors (e.g., blue) and a dislike of others (e.g., dark chartreuse), with preference varying systematically with variation in hue (e.g., Hurlbert & Owen, 2015). Here, we used functional MRI to investigate why humans have likes and dislikes for simple patches of color, and to understand the neural basis of preference, aesthetics and value judgements more generally. We looked for correlations of a behavioural measure of color preference with the blood oxygen level-dependent (BOLD) response when participants performed an irrelevant orientation judgement task on colored squares. A whole brain analysis found a significant correlation between BOLD activity and color preference in the posterior midline cortex (PMC), centred on the precuneus but extending into the adjacent posterior cingulate and cuneus. These results demonstrate that brain activity is modulated by color preference, even when such preferences are irrelevant to the ongoing task in which participants are engaged. They also suggest that color preferences automatically influence our processing of the visual world. Interestingly, the effect in the PMC overlaps with regions identified in neuroimaging studies of preference and value judgements of other types of stimuli. Therefore, our findings extend this literature to show that the PMC is related to automatic encoding of subjective value even for basic visual features such as color.
Sensory Competition in the Face Processing Areas of the Human Brain
The concurrent presentation of multiple stimuli in the visual field may trigger mutually suppressive interactions throughout the ventral visual stream. While several studies have been performed on sensory competition effects among non-face stimuli, relatively little is known about the interactions in the human brain for multiple face stimuli. In the present study we analyzed the neuronal basis of sensory competition in an event-related functional magnetic resonance imaging (fMRI) study using multiple face stimuli. We varied the ratio of faces and phase-noise images within a composite display with a constant number of peripheral stimuli, thereby manipulating the competitive interactions between faces. For contralaterally presented stimuli, we observed strong competition effects in the fusiform face area (FFA) bilaterally and in the right lateral occipital area (LOC), but not in the occipital face area (OFA), suggesting their different roles in sensory competition. When we increased the spatial distance among pairs of faces, the magnitude of suppressive interactions was reduced in the FFA. Surprisingly, the magnitude of competition depended on the visual hemifield of the stimuli: ipsilateral stimulation reduced the competition effects somewhat in the right LOC while it increased them in the left LOC. This suggests a left hemifield dominance of sensory competition. Our results support the sensory competition theory in the processing of multiple faces and suggest that sensory competition occurs in several cortical areas in both cerebral hemispheres.
The Neural Fate of Task-Irrelevant Features in Object-Based Processing
Objects are one of the most fundamental units in visual attentional selection and information processing. Studies have shown that, during object-based processing, all features of an attended object may be encoded together, even when these features are task irrelevant. Some recent studies, however, have failed to find this effect. What determines when object-based processing may or may not occur? In three experiments, observers were asked to encode object colors and the processing of task-irrelevant object shapes was evaluated by measuring functional magnetic resonance imaging responses from a brain area involved in shape representation. Whereas object-based task-irrelevant shape processing was present at low color-encoding load, it was attenuated or even suppressed at high color-encoding load. Moreover, such object-based processing was short-lived and was not sustained over a long delay period. Object-based processing for task-irrelevant features of attended objects thus does exist, as reported previously; but it is transient and its magnitude is determined by the encoding demand of the task-relevant feature.
View-Independent Working Memory Representations of Artificial Shapes in Prefrontal and Posterior Regions of the Human Brain
Traditional views of visual working memory postulate that memorized contents are stored in dorsolateral prefrontal cortex using an adaptive and flexible code. In contrast, recent studies proposed that contents are maintained by posterior brain areas using codes akin to perceptual representations. An important question is whether this reflects a difference in the level of abstraction between posterior and prefrontal representations. Here we investigated whether neural representations of visual working memory contents are view-independent, as indicated by rotation-invariance. Using fMRI and multivariate pattern analyses, we show that when subjects memorize complex shapes, both posterior and frontal brain regions maintain the memorized contents using a rotation-invariant code. Importantly, we found the representations in frontal cortex to be localized to the frontal eye fields rather than dorsolateral prefrontal cortices. Thus, our results give evidence for the view-independent storage of complex shapes in distributed representations across posterior and frontal brain regions.
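The rotation-invariance logic behind analyses like this one — train a decoder on patterns evoked at one viewpoint, test it at another, and take above-chance generalization as evidence of a view-independent code — can be illustrated with a toy cross-decoding sketch. The voxel counts, signal model, and nearest-centroid classifier below are synthetic stand-ins, not the study's actual MVPA pipeline.

```python
import random

random.seed(1)

N_VOX = 50  # number of simulated "voxels" (illustrative)

# Each shape gets a rotation-invariant code; each rotation adds its own
# component shared across shapes. All signals here are made up.
shape_code = {s: [random.gauss(0, 1) for _ in range(N_VOX)] for s in ("A", "B")}
rot_code = {r: [random.gauss(0, 1) for _ in range(N_VOX)] for r in (0, 90)}

def pattern(shape, rot, noise=0.5):
    """Simulated response pattern: invariant shape component plus a
    rotation-specific component plus measurement noise."""
    return [s + r + random.gauss(0, noise)
            for s, r in zip(shape_code[shape], rot_code[rot])]

def centroid(patterns):
    return [sum(vals) / len(patterns) for vals in zip(*patterns)]

def classify(p, centroids):
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(p, centroids[label]))

# Cross-decoding: fit class centroids at rotation 0, test at rotation 90.
train = {s: centroid([pattern(s, 0) for _ in range(20)]) for s in ("A", "B")}
test_set = [(s, pattern(s, 90)) for s in ("A", "B") for _ in range(20)]
accuracy = sum(classify(p, train) == s for s, p in test_set) / len(test_set)

# Above-chance (0.5) accuracy across rotations indicates that the shape
# component, not the rotation component, drives classification.
print(f"cross-rotation decoding accuracy: {accuracy:.2f}")
```

Because the rotation component is shared by both classes, it shifts both training centroids equally and cancels out of the comparison, which is exactly why cross-viewpoint generalization isolates the invariant code.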
Location representations of objects in cluttered scenes in the human brain
When we perceive a visual scene, we usually see an arrangement of multiple cluttered and
partly overlapping objects, like a park with trees and people in it. Spatial attention helps us
to prioritize relevant portions of such scenes to efficiently interact with our environments. In
previous experiments on object recognition, objects were often presented in isolation, and these
studies found that the location of objects is encoded early in time (before ∼150 ms) and in early
visual cortex or in the dorsal stream. However, in real life objects rarely appear in isolation but
are instead embedded in cluttered scenes. Encoding the location of an object in clutter might
require fundamentally different neural computations. Therefore, this dissertation addressed the
question of how location representations of objects on cluttered backgrounds are encoded in
the human brain. To answer this question, we investigated where in cortical space and when in
neural processing time location representations emerge when objects are presented on cluttered
backgrounds and what role spatial attention plays in the encoding of object location. We
addressed these questions in two studies, both including fMRI and EEG experiments. The results
of the first study showed that location representations of objects on cluttered backgrounds emerge
along the ventral visual stream, peaking in region LOC with a temporal delay that was linked to
recurrent processing. The second study showed that spatial attention modulated those location
representations in mid- and high-level regions along the ventral stream and late in time (after
∼150 ms), independently of whether backgrounds were cluttered or not. These findings show
that location representations emerge during late stages of processing both in cortical space and
in neural processing time when objects are presented on cluttered backgrounds and that they
are enhanced by spatial attention. Our results provide a new perspective on visual information
processing in the ventral visual stream and on the temporal dynamics of location processing.
Finally, we discuss how shared neural substrates of location and category representations in the
brain might improve object recognition for real-world vision.
Object Representations for Multiple Visual Categories Overlap in Lateral Occipital and Medial Fusiform Cortex
How representations of visual objects are maintained across changes in viewpoint is a central issue in visual perception. Whether neural processes underlying view-invariant recognition involve distinct subregions within extrastriate visual cortex for distinct categories of visual objects remains unresolved. We used event-related functional magnetic resonance imaging in 16 healthy volunteers to map visual cortical areas responding to a large set (156) of exemplars from 3 object categories (faces, houses, and chairs), each repeated once after a variable time lag (3-7 intervening stimuli). Exemplars were repeated with the same viewpoint (but different retinal size) or with different viewpoint and size. The task was kept constant across object categories (judging items as "young" vs. "old"). We identified object-selective adaptation effects by comparing neural responses to the first presentation versus repetition of each individual exemplar. We found that exemplar-specific adaptation effects partly overlapped with regions showing category-selective responses (as identified using a separate localizer scan). These included the lateral fusiform gyrus (FG) for faces, parahippocampal gyrus for houses, and lateral occipital complex (LOC) for chairs. In face-selective fusiform gyrus (FG), adaptation effects occurred only for faces repeated with the same viewpoint, but not with a different viewpoint, confirming previous studies using faces only. By contrast, a region in right medial FG, adjacent to but nonoverlapping with the more lateral and face-selective FG, showed repetition effects for faces and to a lesser extent for other objects, regardless of changes in viewpoint or in retinal image size. Category- and viewpoint-independent repetition effects were also found in bilateral LOC.
Our results reveal a common neural substrate in bilateral LOC and right medial FG underlying view-invariant and category-independent recognition for multiple object identities, with only a relative preference for faces in medial FG but no selectivity in LOC.