8,784 research outputs found

    Are all the frames equally important?

    In this work, we address the problem of measuring and predicting temporal video saliency, a measure of the importance of a video frame for human attention. Unlike conventional spatial saliency, which localizes the salient regions within a frame (as is done for still images), temporal saliency treats the importance of a frame as a whole and is meaningful only in context. We propose an interactive cursor-based interface for collecting experimental data on temporal saliency. We collect the first human responses and analyze them. We show that, qualitatively, the produced scores clearly reflect semantic changes in a frame, while, quantitatively, they are highly correlated across all observers. In addition, the proposed tool can simultaneously collect fixations similar to those produced by an eye tracker, at lower cost. Further, this approach may be used to create the first temporal saliency datasets for training computational predictive algorithms. The proposed interface does not rely on any special equipment, so it can be run remotely and reach a wide audience.
    Comment: CHI'20 Late Breaking Work
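
    A minimal sketch of how per-frame scores from such a cursor-based tool might be aggregated and checked for inter-observer agreement; the response format, array shapes, and function names are assumptions for illustration, not the authors' implementation:

    ```python
    import numpy as np
    from itertools import combinations

    def aggregate_scores(responses):
        """Average per-frame cursor scores over observers.

        responses: array of shape (n_observers, n_frames).
        """
        return responses.mean(axis=0)

    def inter_observer_correlation(responses):
        """Mean pairwise Pearson correlation between observers' score curves."""
        corrs = [np.corrcoef(responses[i], responses[j])[0, 1]
                 for i, j in combinations(range(len(responses)), 2)]
        return float(np.mean(corrs))

    # Hypothetical data: 5 observers scoring 300 frames of one clip.
    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 6, 300))  # shared "semantic change" curve
    responses = signal + 0.2 * rng.standard_normal((5, 300))
    print(inter_observer_correlation(responses))  # high agreement expected
    ```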

    How is Gaze Influenced by Image Transformations? Dataset and Model

    Data size is the bottleneck in developing deep saliency models, because collecting eye-movement data is very time consuming and expensive. Most current studies on human attention and saliency modeling have used high-quality, stereotypical stimuli. In the real world, however, captured images undergo various types of transformations. Can we use these transformations to augment existing saliency datasets? Here, we first create a novel saliency dataset comprising fixations of 10 observers over 1,900 images degraded by 19 types of transformations. Second, by analyzing eye movements, we find that observers look at different locations in transformed versus original images. Third, we use the new data over transformed images, called data augmentation transformations (DATs), to train deep saliency models. We find that label-preserving DATs with negligible impact on human gaze boost saliency prediction, whereas some other DATs that severely impact human gaze degrade performance. These label-preserving, valid augmentation transformations provide a way to enlarge existing saliency datasets. Finally, we introduce a novel saliency model based on a generative adversarial network (dubbed GazeGAN). A modified UNet serves as the generator of GazeGAN, combining classic skip connections with a novel center-surround connection (CSC) to leverage multi-level features. We also propose a histogram loss based on the Alternative Chi-Square Distance (ACS HistLoss) to refine the saliency map in terms of luminance distribution. Extensive experiments and comparisons over 3 datasets indicate that GazeGAN achieves the best performance on popular saliency evaluation metrics and is more robust to various perturbations. Our code and data are available at: https://github.com/CZHQuality/Sal-CFS-GAN
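
    As a rough illustration of the histogram loss idea, here is a sketch of an alternative chi-square distance between the luminance histograms of two saliency maps; the exact ACS HistLoss formulation (and the differentiable version used for training) is in the linked repository, so the binning and constants below are assumptions:

    ```python
    import numpy as np

    def acs_hist_distance(map_a, map_b, bins=64, eps=1e-8):
        """Alternative chi-square distance, d = 2 * sum((h1 - h2)^2 / (h1 + h2)),
        between luminance histograms of two maps with values in [0, 1]."""
        h1, _ = np.histogram(map_a, bins=bins, range=(0.0, 1.0), density=True)
        h2, _ = np.histogram(map_b, bins=bins, range=(0.0, 1.0), density=True)
        return 2.0 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

    # Hypothetical usage: compare a predicted saliency map with ground truth.
    rng = np.random.default_rng(0)
    pred = rng.random((64, 64))
    gt = np.clip(pred + 0.1 * rng.standard_normal((64, 64)), 0.0, 1.0)
    print(acs_hist_distance(pred, gt))
    ```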

    Trajectory Data Mining in Mouse Models of Stroke

    Contains full text: 273912.pdf (publisher's version, open access). Radboud University, 4 October 2022. Promotor: A.J. Kiliaan. Co-promotor: M. Wiesmann. 167 p.

    Aerospace Medicine and Biology: A continuing bibliography with indexes, supplement 182, July 1978

    This bibliography lists 165 reports, articles, and other documents introduced into the NASA scientific and technical information system in June 1978

    A review of 28 free animal tracking software: current features and limitations

    This version of the article has been accepted for publication after peer review and is subject to Springer Nature's AM terms of use, but is not the Version of Record and does not reflect post-acceptance improvements or any corrections. The Version of Record is available online at: http://dx.doi.org/10.1038/s41684-021-00811-1

    [Abstract]: Well-quantified laboratory studies can provide a fundamental understanding of animal behavior in ecology, ethology and ecotoxicology research. These studies require observing and tracking each animal in well-controlled, well-defined arenas, often over long timescales. Such experiments therefore produce long time series and a vast amount of data, requiring software applications to automate the analysis and reduce manual annotation. In this review, we examine 28 free software applications for animal tracking to guide researchers in selecting the software that might best suit a particular experiment. We also review the algorithms in the tracking pipeline of each application, explain how specific techniques can fit different experiments, and expose each approach's weaknesses and strengths. Our in-depth review covers last update, type of platform, user-friendliness, off- or online video acquisition, calibration method, background subtraction and segmentation method, species, multiple arenas, multiple animals, identity preservation, manual identity correction, data analysis and extra features. We found, for example, that out of 28 programs, only 3 include a calibration algorithm to reduce image distortion and perspective problems, which affect accuracy and can result in substantial errors when analyzing trajectories and extracting mobility or explored distance. In addition, only 4 programs can directly export in-depth tracking and analysis metrics, only 5 are suited for tracking multiple unmarked animals for more than a few seconds, and only 11 have been updated in the period 2019–2021.
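
    To make the surveyed pipeline concrete (background subtraction, segmentation, centroid extraction), here is a minimal single-animal tracking sketch using OpenCV; the video file and parameters are hypothetical, and real tools layer calibration, identity preservation, and analysis on top of this:

    ```python
    import cv2

    def track_centroids(video_path):
        """Track the largest moving blob per frame via MOG2 background subtraction."""
        cap = cv2.VideoCapture(video_path)
        subtractor = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)
        trajectory = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)
            # Drop shadow pixels (value 127) and speckle noise before segmentation.
            _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None, iterations=2)
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            if contours:
                largest = max(contours, key=cv2.contourArea)
                m = cv2.moments(largest)
                if m["m00"] > 0:
                    trajectory.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
        cap.release()
        return trajectory  # (x, y) centroid per frame where the animal was found

    print(len(track_centroids("arena.mp4")))  # hypothetical recording
    ```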

    Eye-tracking assistive technologies for individuals with amyotrophic lateral sclerosis

    Amyotrophic lateral sclerosis (ALS) is a progressive nervous system disorder that affects nerve cells in the brain and spinal cord, resulting in the loss of muscle control. For individuals with ALS whose mobility is limited to the movement of the eyes, eye-tracking-based applications can be used to accomplish basic tasks with certain digital interfaces. This paper reviews existing eye-tracking software and hardware and sketches their application as an assistive technology for coping with ALS. Eye tracking also provides a suitable alternative for controlling game elements. Furthermore, artificial intelligence has been used to improve eye-tracking technology, with significant improvements in calibration and accuracy. Gaps in the literature are highlighted in the study to offer directions for future research.
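
    A minimal sketch of dwell-time selection, an interaction technique commonly used in gaze-based assistive interfaces (a target activates once gaze rests on it long enough); the gaze stream format, threshold, and names are illustrative assumptions, not taken from the paper:

    ```python
    DWELL_SECONDS = 1.0  # assumed dwell threshold
    RADIUS_PX = 40       # assumed tolerance around the target center

    def dwell_select(gaze_stream, target, on_select):
        """Call on_select(target) once gaze stays within RADIUS_PX of target
        for DWELL_SECONDS. gaze_stream yields (timestamp, x, y) tuples."""
        dwell_start = None
        for t, x, y in gaze_stream:
            inside = (x - target[0]) ** 2 + (y - target[1]) ** 2 <= RADIUS_PX ** 2
            if not inside:
                dwell_start = None       # gaze left the target: reset the timer
            elif dwell_start is None:
                dwell_start = t          # gaze entered the target: start timing
            elif t - dwell_start >= DWELL_SECONDS:
                on_select(target)        # dwell complete: trigger the action
                return

    # Hypothetical usage with a synthetic 20 Hz gaze stream hovering on a button.
    stream = ((0.05 * i, 100, 100) for i in range(40))
    dwell_select(stream, target=(105, 98), on_select=lambda tgt: print("select", tgt))
    ```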

    Socio-cognitive profiles for visual learning in young and older adults.

    It is common wisdom that practice makes perfect; but why do some adults learn better than others? Here, we investigate individuals' cognitive and social profiles to test which variables account for variability in learning ability across the lifespan. In particular, we focused on visual learning, using tasks that test the ability to inhibit distractors and select task-relevant features. We tested the ability of young and older adults to improve through training in the discrimination of visual global forms embedded in a cluttered background. Further, we used a battery of cognitive tasks and psycho-social measures to examine which of these variables predict training-induced improvement in perceptual tasks and may account for individual variability in learning ability. Using partial least squares regression modeling, we show that visual learning is influenced by cognitive (i.e., cognitive inhibition, attention) and social (strategic and deep learning) factors rather than an individual's age alone. Further, our results show that, independent of age, strong learners rely on cognitive factors such as attention, while weaker learners use more general cognitive strategies. Our findings suggest an important role for higher-cognitive circuits involving executive functions that contribute to our ability to improve in perceptual tasks after training across the lifespan.

    This work was supported by grants to ZK from the Leverhulme Trust [RF-2011-378], the [European Community's] Seventh Framework Programme [FP7/2007-2013] under agreement PITN-GA-2011-290011, and the Biotechnology and Biological Sciences Research Council [D52199X, E027436]. This is the final version. It was first published by Frontiers at http://journal.frontiersin.org/article/10.3389/fnagi.2015.00105/abstract
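
    A brief sketch of the partial least squares regression analysis mentioned above, using scikit-learn; the predictors and data here are hypothetical stand-ins for the cognitive and social measures the study collected:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    n = 60  # hypothetical number of participants

    # Columns stand in for: inhibition, attention, strategic learning,
    # deep learning, age (assumed ordering for illustration).
    X = rng.standard_normal((n, 5))
    # Improvement driven by cognitive/social columns, not the age column.
    y = 0.6 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] + 0.1 * rng.standard_normal(n)

    pls = PLSRegression(n_components=2)
    pls.fit(X, y)
    print("R^2:", pls.score(X, y))
    # Loadings show which measures drive each latent component.
    print(pls.x_loadings_)
    ```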