Task-set switching with natural scenes: Measuring the cost of deploying top-down attention
In many everyday situations, we bias our perception from the top down, based on a task or an agenda. Frequently, this entails shifting attention to a specific attribute of a particular object or scene. To explore the cost of shifting top-down attention to a different stimulus attribute, we adopt the task-set switching paradigm, in which switch trials are contrasted with repeat trials in mixed-task blocks and with single-task blocks. Using two tasks that relate to the content of a natural scene in a gray-level photograph and two tasks that relate to the color of the frame around the image, we were able to distinguish switch costs with and without shifts of attention. We found a significant cost in reaction time of 23–31 ms for switches that require shifting attention to other stimulus attributes, but no significant switch cost for switching the task set within an attribute. We conclude that deploying top-down attention to a different attribute incurs a significant cost in reaction time, but that biasing to a different feature value within the same stimulus attribute is effortless.
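A minimal sketch of how the switch-cost contrast described above could be computed from trial-level reaction times. The column names and data layout are illustrative assumptions, not the authors' analysis code.

```python
# Hypothetical illustration: computing switch costs from trial-level RTs.
# Column names ("rt_ms", "trial_type", "attribute_shift") are assumptions,
# not the authors' actual data format.
import pandas as pd

def switch_cost(trials: pd.DataFrame) -> pd.Series:
    """Mean RT difference between switch and repeat trials in mixed blocks,
    split by whether the switch required shifting attention to a different
    stimulus attribute."""
    mixed = trials[trials["block"] == "mixed"]
    mean_rt = mixed.groupby(["attribute_shift", "trial_type"])["rt_ms"].mean()
    return (mean_rt.xs("switch", level="trial_type")
            - mean_rt.xs("repeat", level="trial_type"))

# Example usage with made-up data:
df = pd.DataFrame({
    "block": ["mixed"] * 8,
    "trial_type": ["switch", "repeat"] * 4,
    "attribute_shift": ["within"] * 4 + ["across"] * 4,
    "rt_ms": [612, 608, 620, 615, 655, 624, 648, 621],
})
print(switch_cost(df))  # positive values indicate a switch cost in ms
```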
Towards Understanding Reasoning Complexity in Practice
Although the computational complexity of the logic underlying the standard Web Ontology Language OWL 2 appears discouraging for real applications, several contributions have shown that reasoning with OWL ontologies is feasible in practice. It turns out that reasoning in practice is often far less complex than the established theoretical complexity bounds suggest, since these reflect the worst-case scenario. State-of-the-art reasoners like FaCT++, HermiT, Pellet, and Racer have demonstrated that acceptable performance can be achieved even with fairly expressive fragments of OWL 2. However, it is still not well understood why reasoning is feasible in practice, and it is rather unclear how to study this problem. In this paper, we suggest first steps that, in our opinion, could lead to a better understanding of practical complexity. We also provide and discuss some initial empirical results with HermiT on prominent ontologies.
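As a hedged illustration of the kind of empirical measurement the abstract alludes to, the sketch below times ontology classification with HermiT via the owlready2 Python package (which bundles HermiT). The ontology IRI is only a placeholder; this is not the paper's experimental setup.

```python
# Hedged sketch: timing ontology classification in practice.
# Assumes the owlready2 package and a Java runtime are installed;
# the ontology IRI below is only a placeholder.
import time
from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("http://example.org/some-prominent-ontology.owl").load()

start = time.perf_counter()
with onto:
    sync_reasoner()  # runs the bundled HermiT reasoner on the loaded ontology
elapsed = time.perf_counter() - start

print(f"Classification took {elapsed:.2f} s "
      f"for {len(list(onto.classes()))} classes")
```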
ChlamyCyc - a comprehensive database and web-portal centered on _Chlamydomonas reinhardtii_
*Background* - The unicellular green alga _Chlamydomonas reinhardtii_ is an important eukaryotic model organism for the study of photosynthesis and growth, as well as flagella development and other cellular processes. In the era of high-throughput technologies there is an imperative need to integrate large-scale data sets from high-throughput experimental techniques using computational methods and database resources to provide comprehensive information about the whole cellular system of a single organism.
*Results* - In the framework of the German Systems Biology initiative GoFORSYS a pathway/genome database and web-portal for _Chlamydomonas reinhardtii_ (ChlamyCyc) was established, which currently features about 270 metabolic pathways with related genes, enzymes, and compound information. ChlamyCyc was assembled using an integrative approach combining the recently published genome sequence, bioinformatics methods, and experimental data from metabolomics and proteomics experiments. We analyzed and integrated a combination of primary and secondary database resources, such as existing genome annotations from JGI, EST collections, orthology information, and MapMan classification.
*Conclusion* - ChlamyCyc provides a curated and integrated systems biology repository that will enable and assist in systematic studies of fundamental cellular processes in _Chlamydomonas reinhardtii_. The ChlamyCyc database and web-portal is freely available at http://chlamycyc.mpimp-golm.mpg.de.
Quantitative temporal logics over the reals: PSpace and below
In many cases, the addition of metric operators to qualitative temporal logics (TLs) increases the complexity of satisfiability by at least one exponential: while common qualitative TLs are complete for NP or PSpace, their metric extensions are often ExpSpace-complete or even undecidable. In this paper, we exhibit several metric extensions of qualitative TLs of the real line that are at most PSpace-complete, and analyze the transition from NP to PSpace for such logics. Our first result is that the logic obtained by extending since-until logic of the real line with the operators ‘sometime within n time units in the past/future’ is still PSpace-complete. In contrast to existing results, we also capture the case where n is coded in binary and the finite variability assumption is not made. To establish containment in PSpace, we use a novel reduction technique that can also be used to prove tight upper complexity bounds for many other metric TLs in which the numerical parameters to metric operators are coded in binary. We then consider metric TLs of the reals that do not offer any qualitative temporal operators. In such languages, the complexity turns out to depend on whether binary or unary coding of parameters is assumed: satisfiability is still PSpace-complete under binary coding, but only NP-complete under unary coding.
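For readability, here is a sketch of the intended semantics of the metric operators mentioned above, under the standard real-time reading; the exact interval bounds are an assumption and may differ from the paper's definitions.

```latex
% Sketch of the semantics of "sometime within n time units in the
% future/past" over a real-valued flow of time; the choice of half-open
% intervals is an assumption, not necessarily the paper's exact definition.
\[
  \mathfrak{M}, t \models \Diamond_{\leq n}\,\varphi
  \quad\Longleftrightarrow\quad
  \exists t' \in (t,\, t+n] \ \text{ such that } \ \mathfrak{M}, t' \models \varphi ,
\]
\[
  \mathfrak{M}, t \models \Diamond^{-}_{\leq n}\,\varphi
  \quad\Longleftrightarrow\quad
  \exists t' \in [\,t-n,\, t) \ \text{ such that } \ \mathfrak{M}, t' \models \varphi .
\]
```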
Saliency on a chip: a digital approach with an FPGA
Selective-visual-attention algorithms have been successfully implemented in analog VLSI circuits [1]. However, in addition to the usual issues of analog VLSI, such as the need to fine-tune a large number of biases, these implementations lack the spatial resolution and pre-processing capabilities to be truly useful for image-processing applications. Here we take an alternative approach and implement a neuro-mimetic algorithm for selective visual attention in digital hardware.
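As a rough software analogue of the kind of neuro-mimetic saliency computation referred to above, here is a minimal center-surround sketch in Python. It is an illustrative approximation in the Itti/Koch model family, not the FPGA implementation itself.

```python
# Minimal center-surround saliency sketch (intensity channel only).
# This is an illustrative software approximation, not the FPGA design.
import numpy as np
from scipy.ndimage import gaussian_filter

def intensity_saliency(img: np.ndarray) -> np.ndarray:
    """Return a coarse saliency map from a grayscale image in [0, 1]."""
    saliency = np.zeros_like(img, dtype=float)
    # Center-surround differences across a few spatial scales.
    for center_sigma, surround_sigma in [(1, 4), (2, 8), (4, 16)]:
        center = gaussian_filter(img, center_sigma)
        surround = gaussian_filter(img, surround_sigma)
        saliency += np.abs(center - surround)
    saliency -= saliency.min()
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency

# Usage: feed a grayscale frame and take the argmax as the most salient location.
frame = np.random.rand(240, 320)          # placeholder image
smap = intensity_saliency(frame)
y, x = np.unravel_index(np.argmax(smap), smap.shape)
print(f"most salient location: ({x}, {y})")
```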
Modeling feature sharing between object detection and top-down attention
Visual search and other attentionally demanding processes are often guided from the top down when a specific task is given (e.g., Wolfe et al., Vision Research 44, 2004). In the simplified stimuli commonly used in visual search experiments, e.g. red and horizontal bars, the selection of potential features that might be biased for is obvious (by design). In a natural setting with real-world objects, the selection of these features is not obvious, and there is some debate about which features can be used for top-down guidance, and how a specific task maps to them (Wolfe and Horowitz, Nat. Rev. Neurosci. 2004).
Learning to detect objects provides the visual system with an effective set of features suitable for the detection task, and with a mapping from these features to an abstract representation of the object.
We suggest a model in which V4-type features are shared between object detection and top-down attention. As the model familiarizes itself with objects, i.e., learns to detect them, it acquires a representation for features to solve the detection task. We propose that, via cortical feedback connections, top-down processes can re-use these same features to bias attention to locations with a higher probability of containing the target object. We propose a model architecture that allows for such processing, and we present a computational implementation of the model that performs visual search in natural scenes for a given object category, e.g. for faces. We compare the performance of our model to pure bottom-up selection as well as to top-down attention using simple features such as hue.
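A schematic sketch of the kind of feature re-use described above: feature maps learned for detection are re-weighted to bias a top-down attention map toward likely target locations. The names and the weighting scheme are illustrative assumptions, not the authors' implementation.

```python
# Schematic sketch: re-using detection features for top-down attention.
# The feature maps and per-category weights are placeholders; this is not
# the authors' actual model code.
import numpy as np

def topdown_attention(feature_maps: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Weight shared feature maps (n_features, H, W) by how diagnostic each
    feature is for the target category, yielding a top-down bias map."""
    bias = np.tensordot(weights, feature_maps, axes=([0], [0]))  # shape (H, W)
    bias -= bias.min()
    return bias / bias.max() if bias.max() > 0 else bias

# Toy usage: 8 shared "V4-like" feature maps, weights learned for faces.
rng = np.random.default_rng(0)
features = rng.random((8, 60, 80))
face_weights = rng.random(8)              # diagnostic value of each feature
attention_map = topdown_attention(features, face_weights)
y, x = np.unravel_index(np.argmax(attention_map), attention_map.shape)
print(f"candidate target location: ({x}, {y})")
```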
Is bottom-up attention useful for object recognition?
A key problem in learning multiple objects from unlabeled images is that it is a priori impossible to tell which part of the image corresponds to each individual object, and which part is irrelevant clutter that is not associated with the objects. We investigate empirically to what extent pure bottom-up attention can extract useful information about the location, size, and shape of objects from images, and demonstrate how this information can be utilized to enable unsupervised learning of objects from unlabeled images. Our experiments demonstrate that the proposed approach to using bottom-up attention is indeed useful for a variety of applications.
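An illustrative sketch of the general idea of turning bottom-up saliency into candidate object regions that an unsupervised learner could consume. The threshold and region-extraction step are assumptions, not the paper's pipeline.

```python
# Illustrative sketch: turning bottom-up saliency into candidate object
# regions for unsupervised learning. The threshold and labeling step are
# assumptions; this is not the paper's pipeline.
import numpy as np
from scipy.ndimage import label, find_objects

def salient_regions(saliency: np.ndarray, thresh: float = 0.6):
    """Return bounding boxes (slice pairs) of connected regions whose
    saliency exceeds `thresh` relative to the map maximum."""
    mask = saliency >= thresh * saliency.max()
    labeled, _ = label(mask)
    return [box for box in find_objects(labeled) if box is not None]

# Usage: crop the boxes and feed the crops to an unsupervised learner
# (e.g. clustering of appearance features) instead of whole images.
saliency_map = np.random.rand(240, 320)   # placeholder saliency map
for box in salient_regions(saliency_map):
    crop_h = box[0].stop - box[0].start
    crop_w = box[1].stop - box[1].start
    print(f"candidate region of size {crop_h}x{crop_w}")
```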
- …
