
    Visual working memory contents bias ambiguous structure from motion perception

    The way we perceive the visual world depends crucially on the state of the observer. In the present study we show that what we are holding in working memory (WM) can bias the way we perceive ambiguous structure-from-motion stimuli. Holding in memory the percept of an unambiguously rotating sphere influenced the perceived direction of motion of an ambiguously rotating sphere presented shortly thereafter. In particular, we found a systematic difference between congruent dominance periods, in which the perceived direction of the ambiguous stimulus corresponded to the direction of the unambiguous one, and incongruent dominance periods. Congruent dominance periods were more frequent when participants memorized the speed of the unambiguous sphere for delayed discrimination than when they performed an immediate judgment on a change in its speed. Analysis of the dominance time-course showed that a sustained tendency to perceive the same direction of motion as the prior stimulus emerged only in the WM condition, whereas in the attention condition perceptual dominance dropped to chance levels by the end of the trial. The results are explained in terms of a direct involvement of early visual areas in the active representation of visual motion in WM.

    Visualization as a Means of Influence (on the Example of Student Periodicals)

    Visualization is a modern tool for communicating with media-perceptive readers. Visual means make it possible to influence the reader's emotions, to substitute for portions of text, to diversify the compositional design of publications, and to create an aesthetic that shapes a periodical's image and reputation. Visualization is actively used in student periodicals, whose target audience is modern youth accustomed to clip thinking. To attract and hold the attention of such readers, the editors of student publications diversify their content with a range of visual aids; typical forms of visualization include photographs, drawings, pictures, symbols, comics, puzzles, tables, graphics, and variation of the page space through fonts, colors, and decorative lines. The article analyzes the use of visual means in the content of student magazines in Ukraine and Poland and demonstrates the effectiveness of visual communication for these publications. The research establishes that the influence of visualization has its own specificity: it is complex and systematic. The determining criteria for the effectiveness of visual communication in student periodicals are visual activity, the degree of cognitive perception, compositional organization, supragraphemics, topographemics, non-pictographic elements, and text positioning.

    Estimation of Driver's Gaze Region from Head Position and Orientation using Probabilistic Confidence Regions

    A smart vehicle should be able to understand human behavior and predict human actions to avoid hazardous situations. Specific traits in human behavior can be automatically predicted, which can help the vehicle make decisions, increasing safety. One of the most important aspects of the driving task is the driver's visual attention. Predicting the driver's visual attention can help a vehicle understand the driver's awareness state, providing important contextual information. While estimating the exact gaze direction is difficult in the car environment, a coarse estimate of visual attention can be obtained by tracking the position and orientation of the head. Since the relation between head pose and gaze direction is not one-to-one, this paper proposes a formulation based on probabilistic models to create salient regions describing the visual attention of the driver. The area of the predicted region is small when the model has high confidence in the prediction, which is learned directly from the data. We use Gaussian process regression (GPR) to implement the framework, comparing its performance with other regression formulations such as linear regression and neural-network-based methods. We evaluate these frameworks by studying the tradeoff between the spatial resolution and the accuracy of the probability map, using naturalistic recordings collected with the UTDrive platform. We observe that the GPR method produces the best result, creating accurate predictions with localized salient regions. For example, the 95% confidence region is defined by an area that covers only 3.77% of a sphere surrounding the driver.
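
The paper's central idea, that the predictive variance of GPR can size a confidence region around the gaze estimate, can be illustrated with a minimal from-scratch sketch. Everything below (the RBF kernel hyperparameters, the synthetic head-yaw/gaze-yaw data, and the helper names) is invented for illustration and is not the authors' implementation:

```python
# Minimal GP regression: predictive variance yields a per-query
# confidence interval that narrows where training data are dense.
import numpy as np

def rbf(a, b, length=15.0, var=25.0):
    # Squared-exponential kernel between two 1-D coordinate arrays.
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

rng = np.random.default_rng(0)
# Synthetic stand-in for head-pose -> gaze pairs (degrees); the paper
# trains on naturalistic UTDrive recordings instead.
x_train = rng.uniform(-60, 60, 80)
y_train = 0.7 * x_train + rng.normal(0.0, 2.0, 80)

noise = 4.0                                   # observation noise variance
K = rbf(x_train, x_train) + noise * np.eye(80)
alpha = np.linalg.solve(K, y_train)           # precomputed K^-1 y

def predict(x_query):
    k_star = rbf(x_query, x_train)            # cross-covariances, (m, 80)
    mean = k_star @ alpha
    cov = rbf(x_query, x_query) - k_star @ np.linalg.solve(K, k_star.T)
    std = np.sqrt(np.clip(np.diag(cov), 0.0, None))
    return mean, std

mean, std = predict(np.array([15.0]))
lo, hi = mean[0] - 1.96 * std[0], mean[0] + 1.96 * std[0]
print(f"gaze yaw estimate {mean[0]:.1f} deg, 95% region [{lo:.1f}, {hi:.1f}]")
```

The 1.96·std interval is a one-dimensional analogue of the paper's 95% confidence region on the sphere: where data are dense the interval narrows, mirroring the observation that region area shrinks as model confidence grows.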

    The Impact of Expertise in Archery on the Attentional Biases of Perceptual and Representational Pseudoneglect

    Turnbull and McGeorge (1998) asked a group of participants whether they had bumped into anything recently and, if so, on which side. Results reflected a trend towards bumping on the right. This tendency to bump into objects on the right has since been observed in a naturalistic setting (Nicholls, Loftus, Meyer, & Mattingley, 2007). Rather than an interesting statistical quirk, these studies, and many others, have captured a phenomenon called pseudoneglect (Bowers & Heilman, 1980). It represents a subtle yet consistent bias of our spatial attention towards the left half of space and away from the right, which results in the pattern of bumping and other lateralised errors seen in the spatial attention literature (see Jewell & McCourt, 2000). Furthermore, this bias does not affect only the perceptual sphere; it also crosses into the representational, impacting our memory for visual information (Bisiach & Luzzatti, 1987). Whether this bias is consistent in individuals trained in an accuracy-based sport remains unknown. The current research examined perceptual and representational pseudoneglect in a group of expert archers compared to neurologically healthy controls. Results suggest that the attainment of expert level in archery is associated with reduced perceptual pseudoneglect. Archers showed a trend towards reduced representational pseudoneglect, but this was non-significant. Results are discussed in line with theoretical frameworks of visual attention, pseudoneglect and expertise.

    Visual activism and social justice: using visual methods to make young people’s complex lives visible across ‘public’ and ‘private’ spaces

    Much critical social justice research, including work employing visual methods, focuses on young people’s use of public spaces, leaving domestic spaces relatively unexplored. Such research tacitly maintains modernist notions of the public/private distinction, in which the private sphere is considered less relevant to concerns of social justice. However, UK crime and social justice policy has increasingly intervened in the home lives of the poorest British families. Further, such policies have been legitimated by drawing on (or not contesting) media imagery that constructs these family lives almost entirely negatively, obscuring their complexity. Drawing on childhood studies research, and on a project that employed visual methods to explore belonging among young people in foster, kinship or residential care, this paper examines participants’ often fragile efforts to find or forge places in which they could feel at ‘home’ and imagine a future. In so doing, it invites visual activists to reconsider their understanding of public and private spaces in order to contest prevalent unsympathetic policy representations of poorer young people’s lives, to focus greater attention on their need for support, and to extend imaginations of their futures.

    Object-based selection of irrelevant features is not confined to the attended object

    Attention to one feature of an object can bias the processing of unattended features of that object. Here we demonstrate with ERPs in visual search that this object-based bias for an irrelevant feature also appears in an unattended object when it shares that feature with the target object. Specifically, we show that the ERP response elicited by a distractor object in one visual field is modulated as a function of whether a task-irrelevant color of that distractor is also present in the target object presented in the opposite visual field. Importantly, we find this modulation to arise with a delay of approximately 80 msec relative to the N2pc, a component of the ERP response that reflects the focusing of attention onto the target. In a second experiment, we demonstrate that this modulation reflects enhanced neural processing of the unattended object. Together, these observations support the surprising conclusion that the object-based selection of irrelevant features is spatially global even after attention has selected the target object.

    Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning

    Visual question answering requires high-order reasoning about an image, a fundamental capability needed by machine systems to follow complex directives. Recently, modular networks have been shown to be an effective framework for performing visual reasoning tasks. While modular networks were initially designed with a degree of model transparency, their performance on complex visual reasoning benchmarks was lacking. Current state-of-the-art approaches do not provide an effective mechanism for understanding the reasoning process. In this paper, we close the performance gap between interpretable models and state-of-the-art visual reasoning methods. We propose a set of visual-reasoning primitives which, when composed, manifest as a model capable of performing complex reasoning tasks in an explicitly interpretable manner. The fidelity and interpretability of the primitives' outputs enable an unparalleled ability to diagnose the strengths and weaknesses of the resulting model. Critically, we show that these primitives are highly performant, achieving state-of-the-art accuracy of 99.1% on the CLEVR dataset. We also show that our model learns generalized representations effectively when provided with a small amount of data containing novel object attributes. Using the CoGenT generalization task, we show an improvement of more than 20 percentage points over the current state of the art. (CVPR 2018 preprint.)
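
The transparency argument, that each primitive's intermediate output can be inspected in isolation, can be illustrated with a toy sketch. In the paper the primitives are neural modules operating on attention maps over image features; here the scene, the primitive names, and the mask logic are all invented stand-ins:

```python
# Each primitive maps an attention mask over scene objects to a new mask,
# so every intermediate step of the reasoning chain stays inspectable.
scene = [
    {"color": "red", "shape": "cube"},
    {"color": "blue", "shape": "sphere"},
    {"color": "red", "shape": "sphere"},
]

def attend(attr, value):
    # Primitive: keep only currently-attended objects matching a concept.
    return lambda mask: [m and obj[attr] == value for m, obj in zip(mask, scene)]

def count(mask):
    # Primitive: reduce an attention mask to an answer.
    return sum(mask)

# "How many red spheres?" composed as attend(red) -> attend(sphere) -> count
mask = [True] * len(scene)
for prim in (attend("color", "red"), attend("shape", "sphere")):
    mask = prim(mask)          # intermediate masks can be printed/diagnosed
print(count(mask))             # -> 1
```

Printing `mask` between steps is the toy analogue of visualizing a module's attention map to diagnose where a composed model goes wrong.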

    Haptic induced motor learning and the extension of its benefits to stroke patients

    In this research, the Haptic Master robotic arm and virtual environments are used to induce motor learning in subjects with no known musculoskeletal or neurological disorders. We find that both the perception and the performance of the subject improve through the haptic and visual feedback delivered by the Haptic Master. These benefits may be extended to enhance therapies for patients with loss of motor skills due to neurological disease or brain injury. Force and visual feedback were manipulated within virtual-environment scenarios to facilitate learning. In one force-feedback condition, the subject maneuvered a sphere through a haptic maze or linear channel. In the second condition, the subject's movement was stopped when the sphere came in contact with the haptic walls; to resume movement, the force vector had to be redirected towards the optimal trajectory. To analyze the efficiency of the various scenarios, the area between the optimal and actual trajectories was used as a measure of learning. The results demonstrated that within more complex environments one type of force feedback was more successful in facilitating motor learning; in a simpler environment, two out of three subjects experienced a higher degree of motor learning with the same type of force feedback. Learning was not enhanced by the presence of visual feedback. In nearly all studied cases, the primary limitation to learning was shoulder and attention fatigue brought on by the experimentation.
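
The learning measure described above, the area between the optimal and actual trajectories, can be sketched numerically with the trapezoidal rule. The sample trajectories and the function name below are illustrative, not the study's data:

```python
# Area between an actual and an optimal trajectory, integrated over time:
# a smaller area indicates better tracking, i.e. more motor learning.
import numpy as np

def trajectory_error_area(t, actual, optimal):
    dev = np.abs(np.asarray(actual) - np.asarray(optimal))
    # Trapezoidal rule over the sampled deviation.
    return float(np.sum(0.5 * (dev[1:] + dev[:-1]) * np.diff(t)))

t = np.linspace(0.0, 1.0, 101)
optimal = np.zeros_like(t)               # centerline of a linear channel
early = 0.05 * np.sin(2 * np.pi * t)     # early trial: large deviations
late = 0.01 * np.sin(2 * np.pi * t)      # later trial: deviations shrink

print(trajectory_error_area(t, early, optimal))  # larger area
print(trajectory_error_area(t, late, optimal))   # smaller area -> learning
```

Comparing the area across successive trials gives a single scalar learning curve per subject, which is how a metric like this supports the complexity comparisons reported above.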

    The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences From Natural Supervision

    We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns simply by looking at images and reading paired questions and answers. Our model builds an object-based scene representation and translates sentences into executable, symbolic programs. To bridge the learning of the two modules, we use a neuro-symbolic reasoning module that executes these programs on the latent scene representation. Analogous to human concept learning, the perception module learns visual concepts based on the language description of the object being referred to; meanwhile, the learned visual concepts facilitate learning new words and parsing new sentences. We use curriculum learning to guide the search over the large compositional space of images and language. Extensive experiments demonstrate the accuracy and efficiency of our model at learning visual concepts, word representations, and semantic parsing of sentences. Further, our method allows easy generalization to new object attributes, compositions, language concepts, scenes and questions, and even to new program domains. It also enables applications including visual question answering and bidirectional image-text retrieval. (ICLR 2019, Oral. Project page: http://nscl.csail.mit.edu)
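
The "sentences into executable symbolic programs" idea can be sketched with a hand-written toy. In NS-CL both the semantic parser and the concept representations are learned; the scene, the operation names, and the hard-coded program below are invented for illustration:

```python
# A parsed question becomes a list of symbolic operations executed
# against an object-based scene representation.
scene = [
    {"shape": "cube", "color": "green"},
    {"shape": "sphere", "color": "red"},
]

def run(program, objects):
    out = objects
    for op, arg in program:
        if op == "filter":          # keep objects matching a concept
            attr, value = arg
            out = [o for o in out if o[attr] == value]
        elif op == "query":         # read an attribute off the survivor
            return out[0][arg]
    return out

# "What color is the sphere?" ~ filter(sphere) -> query(color)
program = [("filter", ("shape", "sphere")), ("query", "color")]
print(run(program, scene))          # -> red
```

In the real model the filters operate on learned concept embeddings rather than string equality, which is what lets new words and attributes slot into existing programs.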