
    A perceptual comparison of empirical and predictive region-of-interest video

    When viewing multimedia presentations, a user attends to only a relatively small part of the video display at any one point in time. By shifting the allocation of bandwidth from peripheral areas to the locations where a user's gaze is most likely to rest, attentive displays can be produced. Attentive displays aim to reduce resource requirements while minimizing negative user perception, understood in this paper as covering not only a user's ability to assimilate and understand information but also his or her subjective satisfaction with the video content. This paper introduces and discusses a perceptual comparison between two region-of-interest display (RoID) adaptation techniques. A RoID is an attentive display in which bandwidth has been preallocated around measured or highly probable areas of user gaze. In this paper, video content was manipulated using two sources of data: empirically measured data (captured using eye-tracking technology) and predictive data (calculated from the physical characteristics of the video data). Results show that display adaptation causes significant variation in users' understanding of specific multimedia content. Interestingly, both RoID adaptation and the type of video being presented affect user perception of video quality. Moreover, the use of frame rates below 15 frames per second, for any video adaptation technique, caused a significant reduction in user-perceived quality, suggesting that users are not only aware of video quality reduction but that it also impacts their level of information assimilation and understanding. Results also highlight that users' level of enjoyment is significantly affected by the type of video yet is less affected by the quality or type of video adaptation, an interesting implication for the field of entertainment.
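
    The core idea lends itself to a small sketch. The snippet below is a minimal illustration only, not the paper's implementation: the function name, the circular ROI, and the use of blur as a stand-in for reduced bandwidth are all my own assumptions. It keeps a region around a gaze point sharp and degrades the periphery; the gaze point could come from an eye tracker (empirical) or from a saliency prediction (predictive).

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def roid_frame(frame, gaze_xy, roi_radius=80.0, blur_sigma=6.0):
            """Blend a sharp ROI around gaze_xy into a degraded periphery.

            frame: 2-D grayscale array (H x W); gaze_xy: (x, y) in pixels.
            """
            h, w = frame.shape
            ys, xs = np.mgrid[0:h, 0:w]
            dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])
            # A heavily smoothed copy stands in for a low-bandwidth encoding.
            periphery = gaussian_filter(frame.astype(float), sigma=blur_sigma)
            # Soft mask: 1 inside the ROI, fading to 0 beyond 2 * roi_radius.
            mask = np.clip(1.0 - (dist - roi_radius) / roi_radius, 0.0, 1.0)
            return mask * frame + (1.0 - mask) * periphery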

    The CHREST architecture of cognition: the role of perception in general intelligence

    Original paper available at http://www.atlantis-press.com/publications/aisr/AGI-10/ (Atlantis Press, open access under the Creative Commons Attribution License). This paper argues that the CHREST architecture of cognition can shed important light on developing artificial general intelligence. The key theme is that "cognition is perception." A description of the main components and mechanisms of the architecture is followed by a discussion of several domains where CHREST has already been successfully applied, such as the psychology of expert behaviour, the acquisition of language by children, and the learning of multiple representations in physics. The characteristics of CHREST that enable it to account for empirical data include self-organisation, an emphasis on cognitive limitations, the presence of a perception-learning cycle, and the use of naturalistic data as input for learning. We argue that some of these characteristics can help shed light on the hard questions facing theorists developing artificial general intelligence, such as intuition, the acquisition and use of concepts, and the role of embodiment.
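
    CHREST descends from EPAM-style chunking and discrimination networks. As a rough illustration of the perception-learning cycle the abstract mentions, the toy sketch below (my own simplification; the names and structure are not CHREST's actual implementation) sorts feature patterns down a tree and grows it by at most one link per exposure, mirroring incremental chunk learning.

        class Node:
            """One node of a toy discrimination network."""
            def __init__(self, image=()):
                self.image = image     # the chunk this node stands for
                self.children = {}     # test feature -> child Node

        def recognise(root, pattern):
            """Sort a pattern down the net; return the deepest match."""
            node, i = root, 0
            while i < len(pattern) and pattern[i] in node.children:
                node = node.children[pattern[i]]
                i += 1
            return node, i

        def learn(root, pattern):
            """One perception-learning step: add a single test link for
            the first unrecognised feature, so chunks grow gradually."""
            node, i = recognise(root, pattern)
            if i < len(pattern):
                node.children[pattern[i]] = Node(pattern[:i + 1])
            return root

        # Repeated exposure to ('b', 'd', 'q') builds the chunk one step
        # at a time, reflecting the architecture's cognitive limitations.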

    Prediction of Search Targets From Fixations in Open-World Settings

    Previous work on predicting the target of visual search from human fixations considered only closed-world settings, in which training labels are available and predictions are performed for a known set of potential targets. In this work we go beyond the state of the art by studying search target prediction in an open-world setting, in which we no longer assume that we have fixation data to train on for the search targets. We present a dataset containing fixation data of 18 users searching for natural images from three image categories within synthesised image collages of about 80 images. In a closed-world baseline experiment we show that we can predict the correct target image out of a candidate set of five images. We then present a new problem formulation for search target prediction in the open-world setting that is based on learning compatibilities between fixations and potential targets.
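
    To give a rough sense of what "learning compatibilities" could look like, the sketch below scores each candidate target by the similarity between its features and the features of the image patches the user fixated. Feature extraction is left abstract, and the function names and the cosine-similarity choice are illustrative assumptions, not the authors' method.

        import numpy as np

        def compatibility(fixation_feats, target_feat):
            """Mean cosine similarity between fixated patches and a target.

            fixation_feats: (n_fixations, d) array; target_feat: (d,) array.
            """
            f = fixation_feats / np.linalg.norm(fixation_feats, axis=1,
                                                keepdims=True)
            t = target_feat / np.linalg.norm(target_feat)
            return float((f @ t).mean())

        def rank_candidates(fixation_feats, candidate_feats):
            """Order candidate targets by compatibility, best first."""
            scores = [compatibility(fixation_feats, c) for c in candidate_feats]
            return sorted(range(len(scores)), key=lambda i: -scores[i])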

    Perceptual bias, more than age, impacts on eye movements during face processing

    Consistent with the right-hemisphere dominance for face processing, a left perceptual bias (LPB) is typically demonstrated by younger adults viewing faces, and a left eye movement bias has also been revealed. Hemispheric asymmetry is predicted to reduce with age, and older adults have demonstrated a weaker LPB, particularly when viewing time is restricted. What is currently unclear is whether age also weakens the left eye movement bias. Additionally, a right perceptual bias (RPB) for facial judgments has been demonstrated less frequently, but whether it is accompanied by a right eye movement bias has not been investigated. To address these issues, older and younger adults' eye movements and gender judgments of chimeric faces were recorded in two time conditions. Age did not significantly weaken the LPB or the eye movement bias: both groups looked initially to the left side of the face and made more fixations when the gender judgment was based on the left side. A positive association was found between the LPB and initial saccades in the free-view condition, and between the LPB and all eye movement measures (initial saccades, number and duration of fixations) when time was restricted. The eye movement bias shown by LPB participants contrasted with RPB participants, who demonstrated no eye movement bias in either time condition. Consequently, increased age is not clearly associated with weakened perceptual and eye movement biases; instead, an eye movement bias accompanies an LPB (particularly under restricted viewing time) but not an RPB.
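
    For concreteness, bias measures of this kind are often expressed as a simple laterality index; the sketch below shows one common convention, assumed here rather than taken from the paper, applied to judgment counts and to fixation counts.

        def laterality_index(left, right):
            """(L - R) / (L + R): positive values indicate a leftward bias."""
            return (left - right) / (left + right)

        # Perceptual bias: judgments driven by the left vs. right half-face.
        lpb = laterality_index(left=34, right=16)    # 0.36 -> left bias
        # Eye movement bias: fixations landing on each half-face.
        emb = laterality_index(left=210, right=150)  # about 0.17 -> left bias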

    Angry expressions strengthen the encoding and maintenance of face identity representations in visual working memory

    This work was funded by a BBSRC grant (BB/G021538/2) to all authors. Peer reviewed. Preprint.