
    Personalization of Saliency Estimation

    Most existing saliency models use low-level features or task descriptions when generating attention predictions, but the link between observer characteristics and gaze patterns is rarely investigated. We present a novel saliency prediction technique that takes viewers' identities and personal traits into consideration when modeling human attention. Instead of computing image salience only for average observers, we account for the interpersonal variation in the viewing behavior of observers with different personal traits and backgrounds. Our model is an enriched derivative of the generative adversarial network (GAN), able to produce personalized saliency predictions when fed with image stimuli and specific information about the observer. It contains a generator that produces grayscale saliency heat maps from the image and an observer label, paired with an adversarial discriminator that learns to distinguish generated salience from ground-truth salience. The discriminator also takes the observer label as an input, which contributes to the personalization ability of the approach. We evaluate our personalized salience model against a benchmark model and other un-personalized predictions, and show improvements in prediction accuracy for all tested observer groups.
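    As a rough illustration of the conditional setup this abstract describes, the sketch below wires an observer label into both a generator and a discriminator in PyTorch. The 64x64 resolution, the layer sizes, and the embed-and-broadcast conditioning are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class PersonalizedSaliencyGenerator(nn.Module):
    """Generates a grayscale saliency map conditioned on an observer label."""
    def __init__(self, num_observers, img_size=64):
        super().__init__()
        self.img_size = img_size
        # Embed the observer ID and broadcast it as one extra input channel.
        self.obs_embed = nn.Embedding(num_observers, img_size * img_size)
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),    # 3 RGB + 1 observer channel
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(), # heat map in [0, 1]
        )

    def forward(self, image, observer_id):
        b = image.size(0)
        obs = self.obs_embed(observer_id).view(b, 1, self.img_size, self.img_size)
        return self.net(torch.cat([image, obs], dim=1))

class PersonalizedSaliencyDiscriminator(nn.Module):
    """Scores (image, saliency map, observer label) triples as real or generated."""
    def __init__(self, num_observers, img_size=64):
        super().__init__()
        self.img_size = img_size
        self.obs_embed = nn.Embedding(num_observers, img_size * img_size)
        self.net = nn.Sequential(
            nn.Conv2d(5, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # RGB + map + observer
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(),
            nn.Linear(64 * (img_size // 4) ** 2, 1),  # real/fake logit
        )

    def forward(self, image, saliency, observer_id):
        b = image.size(0)
        obs = self.obs_embed(observer_id).view(b, 1, self.img_size, self.img_size)
        return self.net(torch.cat([image, saliency, obs], dim=1))
```

    Feeding the label to the discriminator as well pushes the generator to produce maps that are plausible for that specific observer, not merely plausible on average.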

    Artificially created stimuli produced by a genetic algorithm using a saliency model as its fitness function show that Inattentional Blindness modulates performance in a pop-out visual search paradigm

    Salient stimuli are more readily detected than less salient ones, and individual differences in such detection may explain why some people fail to notice an unexpected stimulus that appears in their visual field whereas others notice it. This failure to notice unexpected stimuli is termed 'Inattentional Blindness' and is more likely to occur when we are engaged in a resource-consuming task. A genetic algorithm is described in which artificial stimuli are created using a saliency model as the fitness function. These generated stimuli, which vary in their saliency level, are used in two studies that implement a pop-out visual search task to evaluate the power of the model to discriminate between the performance of people who were and were not Inattentionally Blind (IB). In one study the number of orientational filters in the model was increased to test whether its discriminatory power and its saliency estimates for low-level images could be improved. Results show that the model's performance does improve when additional filters are included, leading to the conclusion that low-level images may require a higher number of orientational filters for the model to better predict participants' performance. In both studies we found that, given the same target patch image (i.e. the same saliency value), IB individuals take longer to identify a target than non-IB individuals. This suggests that IB individuals require a higher level of saliency in low-level visual features in order to identify target patches.
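    A toy sketch of the generate-evaluate-select loop this abstract describes is given below. The real fitness function is a computational saliency model; here `saliency_score` is a hypothetical stand-in based on local contrast so the example runs end to end, and the patch encoding, operators, and hyperparameters are all illustrative.

```python
import random
import numpy as np

PATCH = 16  # side length of a generated stimulus patch (illustrative)

def saliency_score(patch):
    """Stand-in fitness: the actual studies use a saliency model here.
    This scores mean local contrast so the example is self-contained."""
    return float(np.abs(patch - patch.mean()).mean())

def mutate(patch, rate=0.05):
    """Flip a small fraction of pixels to random intensities."""
    out = patch.copy()
    mask = np.random.rand(*out.shape) < rate
    out[mask] = np.random.rand(mask.sum())
    return out

def crossover(a, b):
    """Single-point crossover on the flattened patches."""
    flat_a, flat_b = a.ravel(), b.ravel()
    cut = random.randrange(1, flat_a.size)
    child = np.concatenate([flat_a[:cut], flat_b[cut:]])
    return child.reshape(a.shape)

def evolve(pop_size=40, generations=100):
    """Truncation selection: keep the most salient half, breed the rest."""
    population = [np.random.rand(PATCH, PATCH) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=saliency_score, reverse=True)
        parents = scored[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=saliency_score)

best = evolve()
print("best patch saliency:", saliency_score(best))
```

    Running the loop with the saliency model as fitness yields stimuli graded along the model's own saliency scale, which is what lets the studies compare IB and non-IB search times at matched saliency values.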

    SeizureNet: Multi-Spectral Deep Feature Learning for Seizure Type Classification

    Automatic classification of epileptic seizure types in electroencephalogram (EEG) data can enable more precise diagnosis and more efficient management of the disease. This task is challenging due to factors such as low signal-to-noise ratios, signal artefacts, high variance in seizure semiology among epileptic patients, and limited availability of clinical data. To overcome these challenges, we present SeizureNet, a deep learning framework that learns multi-spectral feature embeddings using an ensemble architecture for cross-patient seizure type classification. We used the recently released TUH EEG Seizure Corpus (V1.4.0 and V1.5.2) to evaluate the performance of SeizureNet. Experiments show that SeizureNet reaches a weighted F1 score of up to 0.94 for seizure-wise cross-validation and 0.59 for patient-wise cross-validation on scalp-EEG multi-class seizure type classification. We also show that the high-level feature embeddings learnt by SeizureNet considerably improve the accuracy of smaller networks through knowledge distillation, for applications with low memory constraints.
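    The distillation step mentioned at the end is presumably the standard soft-label scheme of Hinton et al.: the small network is trained to match the ensemble's softened output distribution as well as the hard labels. A minimal sketch, with the temperature and mixing weight as illustrative hyperparameters rather than the paper's settings:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Blend a soft-target KL term (teacher knowledge) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so soft-target gradients match the hard-label term
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Illustrative shapes: a batch of 8 EEG windows, 8 seizure classes.
student_logits = torch.randn(8, 8, requires_grad=True)
teacher_logits = torch.randn(8, 8)        # frozen ensemble outputs
labels = torch.randint(0, 8, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```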

    Deep learning investigation for chess player attention prediction using eye-tracking and game data

    This article reports on an investigation of the use of convolutional neural networks to predict the visual attention of chess players. The visual attention model described in this article was created to generate saliency maps that capture hierarchical and spatial features of the chessboard, in order to predict the fixation probability for individual pixels. Using a skip-layer autoencoder architecture with a unified decoder, we are able to use multiscale features to predict the saliency of parts of the board at different scales, capturing multiple relations between pieces. We used scan-path and fixation data from players engaged in solving chess problems to compute 6600 saliency maps associated with the corresponding chess piece configurations. This corpus is complemented with synthetically generated data from actual games gathered from an online chess platform. Experiments using both scan-paths from chess players and the CAT2000 saliency dataset of natural images highlight several results. Deep features pretrained on natural images were found to be helpful in training visual attention prediction for chess. The proposed neural network architecture is able to generate meaningful saliency maps on unseen chess configurations, with good scores on standard metrics. This work provides a baseline for future work on visual attention prediction in similar contexts.
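    A minimal PyTorch sketch of a skip-layer encoder-decoder of the kind described here: encoder features at several scales are concatenated into the decoder so the output map combines local and board-level context. Channel counts and depth are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SkipLayerSaliency(nn.Module):
    """Encoder-decoder with skip connections: multiscale features feed one decoder."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU())
        self.enc3 = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = nn.Sequential(nn.Conv2d(128, 64, 3, padding=1), nn.ReLU())
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(64, 32, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel fixation logit

    def forward(self, x):
        e1 = self.enc1(x)   # full resolution
        e2 = self.enc2(e1)  # 1/2 resolution
        e3 = self.enc3(e2)  # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))  # skip from e2
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip from e1
        return torch.sigmoid(self.head(d1))

saliency = SkipLayerSaliency()(torch.randn(1, 3, 64, 64))  # -> (1, 1, 64, 64)
```

    The skip connections are what let a single unified decoder see both fine piece-level detail and coarser board-level structure when predicting fixation probability.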

    DISC: Deep Image Saliency Computing via Progressive Representation Learning

    Salient object detection increasingly receives attention as an important component or step in several pattern recognition and image processing tasks. Although a variety of powerful saliency models have been proposed, they usually involve heavy feature (or model) engineering based on priors (or assumptions) about the properties of objects and backgrounds. Inspired by the effectiveness of recently developed feature learning, we propose a novel Deep Image Saliency Computing (DISC) framework for fine-grained image saliency computing. In particular, we model image saliency from both coarse- and fine-level observations, and utilize a deep convolutional neural network (CNN) to learn the saliency representation in a progressive manner. Specifically, our saliency model is built upon two stacked CNNs. The first CNN generates a coarse-level saliency map by taking the overall image as input, roughly identifying salient regions in the global context; we also integrate superpixel-based local context information into this first CNN to refine the coarse-level map. Guided by the coarse saliency map, the second CNN focuses on the local context to produce a fine-grained and accurate saliency map while preserving object details. For a test image, the two CNNs collaboratively conduct the saliency computation in one shot. Our DISC framework is capable of uniformly highlighting the objects of interest against complex backgrounds while preserving object details well. Extensive experiments on several standard benchmarks suggest that DISC outperforms other state-of-the-art methods and generalizes well across datasets without additional training. An executable version of DISC is available online at http://vision.sysu.edu.cn/projects/DISC. (This manuscript is the accepted version for IEEE Transactions on Neural Networks and Learning Systems, T-NNLS.)
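    A minimal PyTorch sketch of the coarse-to-fine pipeline, under the assumption that the second stage simply receives the image concatenated with the first stage's map as an extra channel; layer sizes are illustrative and the superpixel-based refinement is omitted.

```python
import torch
import torch.nn as nn

class CoarseNet(nn.Module):
    """First stage: whole image in, coarse saliency map out (global context)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 32, 5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, 5, padding=2), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)

class FineNet(nn.Module):
    """Second stage: image + coarse map in, fine-grained saliency map out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # 3 RGB + 1 coarse-map channel
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image, coarse):
        return self.net(torch.cat([image, coarse], dim=1))

coarse_net, fine_net = CoarseNet(), FineNet()
image = torch.randn(1, 3, 128, 128)
fine_map = fine_net(image, coarse_net(image))  # one-shot coarse-to-fine inference
```

    Chaining the two stages at inference is what the abstract means by computing saliency "in one shot": the coarse map guides the refinement without any iterative post-processing.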