
    Eye-tracking w badaniach kulturowych [Eye Tracking in Cultural Research]

    Eye-tracking is a technology based on tracking the movement of the eyeballs. The results of such a study allow a detailed analysis of the gaze path and answer the questions of what we look at, what we focus on, and what we ignore even though it lies within our field of view. Eye-movement tracking is not a new technology, but it is continually being improved and is gaining importance in everyday life, in many fields of science, and in consumer market research. Contemporary culture, oriented towards the absorption of images, is a particularly fertile ground for many, often non-standard, applications of eye-tracking research.

    Eye-Tracking Signals Based Affective Classification Employing Deep Gradient Convolutional Neural Networks

    Utilizing biomedical signals as a basis for inferring human affective states is a central issue in affective computing (AC). With in-depth research on affective signals, the combination of multi-modal cognition and physiological indicators, the establishment of dynamic and complete databases, and the addition of high-tech innovative products have become recent trends in AC. This research aims to develop a deep gradient convolutional neural network (DGCNN) for classifying affective states from eye-tracking signals. General signal-processing and pre-processing methods were applied first, such as Kalman filtering, Hamming windowing, the short-time Fourier transform (STFT), and the fast Fourier transform (FFT). Secondly, the eye-movement and tracking signals were converted into images. A convolutional neural network-based training structure was subsequently applied; the experimental dataset was acquired with an eye-tracking device by presenting four affective stimuli (nervous, calm, happy, and sad) to 16 participants. Finally, the performance of the DGCNN was compared with a decision tree (DT), a Bayesian Gaussian model (BGM), and k-nearest neighbors (KNN), using the true positive rate (TPR) and false positive rate (FPR) as indices. Customized mini-batch size, loss, learning rate, and gradient definitions were also deployed in the training structure of the deep neural network. The predictive classification matrix showed the effectiveness of the proposed method for eye-movement and tracking signals, achieving more than 87.2% accuracy. This research provides a feasible way to achieve more natural human-computer interaction through eye-movement and tracking signals and has potential application in the affective product design process.
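    The pre-processing chain described in the abstract (smoothing, windowing, time-frequency transform, image conversion) can be sketched as follows. This is a minimal NumPy illustration on a synthetic gaze trace, with a moving-average filter standing in for the paper's Kalman filter; window length, hop size, and sampling rate are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    # Synthetic 1-D eye-movement trace (horizontal gaze position), 500 Hz for 2 s.
    fs = 500
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(0)
    gaze = np.sin(2 * np.pi * 3 * t) + 0.1 * rng.standard_normal(t.size)

    # Moving-average smoothing as a lightweight stand-in for the Kalman filter.
    kernel = np.ones(5) / 5
    smoothed = np.convolve(gaze, kernel, mode="same")

    # Short-time Fourier transform with a Hamming window (hop = half a window).
    win, hop = 128, 64
    window = np.hamming(win)
    frames = [smoothed[i:i + win] * window
              for i in range(0, smoothed.size - win + 1, hop)]
    # Magnitude spectrogram: a (freq bins, time frames) image a CNN can consume.
    spectrogram = np.abs(np.fft.rfft(frames, axis=1)).T
    print(spectrogram.shape)
    ```

    The resulting magnitude image is the kind of input that could then be fed to the convolutional training structure the abstract mentions.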

    Eye Tracking Methods for Analysis of Visuo-Cognitive Behavior in Medical Imaging

    Predictive modeling of human visual search behavior and the underlying metacognitive processes is now possible thanks to significant advances in bio-sensing device technology and machine intelligence. Eye tracking bio-sensors, for example, can measure psycho-physiological response through change events in the configuration of the human eye. These events include positional changes such as visual fixation, saccadic movements, and scanpath, and non-positional changes such as blinks and pupil dilation and constriction. Using data from eye-tracking sensors, we can model human perception, cognitive processes, and responses to external stimuli. In this study, we investigated the visuo-cognitive behavior of clinicians during the diagnostic decision process for breast cancer screening under clinically equivalent experimental conditions involving multiple monitors and breast projection views. Using a head-mounted eye tracking device and a customized user interface, we recorded eye change events and diagnostic decisions from 10 clinicians (three breast-imaging radiologists and seven Radiology residents) for a corpus of 100 screening mammograms (comprising cases of varied pathology and breast parenchyma density). We proposed novel features and gaze analysis techniques, which help to encode discriminative pattern changes in positional and non-positional measures of eye events. These changes were shown to correlate with individual image readers' identity and experience level, mammographic case pathology and breast parenchyma density, and diagnostic decision. Furthermore, our results suggest that a combination of machine intelligence and bio-sensing modalities can provide adequate predictive capability for the characterization of a mammographic case and image readers' diagnostic performance. Lastly, features characterizing eye movements can be utilized for biometric identification purposes. These findings are impactful for real-time performance monitoring and for personalized intelligent training and evaluation systems in screening mammography. Further, the developed algorithms are applicable in other domains involving high-risk visual tasks.
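    The positional features mentioned above (fixations, saccades, scanpath) reduce to simple geometry over the tracker's fixation output. A minimal sketch on toy data, with all coordinates, durations, and feature names illustrative rather than taken from the paper:

    ```python
    import numpy as np

    # Toy fixation sequence: (x, y) in pixels plus duration in ms. In a real
    # study these would come from an eye tracker's fixation-detection output.
    fix_xy = np.array([[512, 384], [600, 400], [580, 300], [450, 350]], float)
    fix_dur = np.array([220.0, 180.0, 310.0, 240.0])  # ms

    # Saccade amplitudes approximated as distances between consecutive fixations.
    amplitudes = np.linalg.norm(np.diff(fix_xy, axis=0), axis=1)

    # A small positional feature vector of the kind that could help
    # discriminate reader identity or experience level.
    features = {
        "mean_fix_dur": fix_dur.mean(),
        "max_fix_dur": fix_dur.max(),
        "mean_saccade_amp": amplitudes.mean(),
        "scanpath_len": amplitudes.sum(),
    }
    print(features)
    ```

    Non-positional measures (blink rate, pupil dilation statistics) would be appended to the same vector before classification.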

    Pupil size signals novelty and predicts later retrieval success for declarative memories of natural scenes

    Declarative memories of personal experiences are a key factor in defining oneself as an individual, which becomes particularly evident when this capability is impaired. Assessing the physiological mechanisms of human declarative memory is typically restricted to patients with specific lesions and requires invasive brain access or functional imaging. We investigated whether the pupil, an accessible physiological measure, can be utilized to probe memories for complex natural visual scenes. During memory encoding, scenes that were later remembered elicited a stronger pupil constriction compared to scenes that were later forgotten. Thus, pupil size predicts success or failure of memory formation. In contrast, novel scenes elicited stronger pupil constriction than familiar scenes during retrieval. When viewing previously memorized scenes, those that were forgotten (misjudged as novel) still elicited stronger pupil constrictions than those correctly judged as familiar. Furthermore, pupil constriction was influenced more strongly if images were judged with high confidence. Thus, we propose that pupil constriction can serve as a marker of novelty. Since stimulus novelty modulates the efficacy of memory formation, our pupil measurements during learning indicate that the later forgotten images were perceived as less novel than the later remembered pictures. Taken together, our data provide evidence that pupil constriction is a physiological correlate of a neural novelty signal during formation and retrieval of declarative memories for complex, natural scenes.
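    The encoding effect reported here amounts to grouping per-scene pupil traces by later memory outcome and comparing constriction strength. A minimal sketch with synthetic, baseline-corrected traces; the group means, trace lengths, and noise level are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_scenes, n_samples = 40, 100  # 100 pupil samples per scene presentation

    # Synthetic baseline-corrected pupil traces: negative values = constriction.
    # Later-remembered scenes are given a slightly stronger constriction.
    remembered = -0.30 + 0.05 * rng.standard_normal((n_scenes, n_samples))
    forgotten = -0.20 + 0.05 * rng.standard_normal((n_scenes, n_samples))

    # Peak constriction per scene, then compare group means.
    peak_rem = remembered.min(axis=1)
    peak_forg = forgotten.min(axis=1)
    print(peak_rem.mean() < peak_forg.mean())  # stronger constriction if remembered
    ```

    In a real analysis the same comparison would be run per participant with a statistical test rather than a raw mean comparison.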

    Estimating the subjective perception of object size and position through brain imaging and psychophysics

    Perception is subjective and context-dependent. Size and position perception are no exceptions. Studies have shown that apparent object size is represented by the retinotopic location of peak response in V1. Such representation is likely supported by a combination of V1 architecture and top-down driven retinotopic reorganisation. Are apparent object size and position encoded via a common mechanism? Using functional magnetic resonance imaging and a model-based reconstruction technique, the first part of this thesis sets out to test whether retinotopic encoding of size percepts generalises to apparent position representation and whether neural signatures can be used to predict an individual's perceptual experience. Here, I present evidence that static apparent position, induced by a dot-variant Müller-Lyer illusion, is represented retinotopically in V1. However, there is mixed evidence for retinotopic representation of motion-induced position shifts (e.g. the curveball illusion) in early visual areas. My findings could be reconciled by assuming dual representation of veridical and percept-based information in early visual areas, which is consistent with the larger framework of predictive coding. The second part of the thesis sets out to compare different psychophysical methods for measuring size perception in the Ebbinghaus illusion. Consistent with the idea that psychophysical methods are not equally susceptible to cognitive factors, my experiments reveal a consistent discrepancy in illusion magnitude estimates between a traditional two-alternative forced choice (2AFC) task and a novel perceptual matching (PM) task, a variant of the comparison-of-comparisons (CoC) task, a design widely seen as the gold standard in psychophysics. Further investigation reveals the difference was not driven by greater 2AFC susceptibility to cognitive factors, but by a tendency for PM to skew illusion magnitude estimates towards the underlying stimulus distribution. I show that this dependency can be largely corrected using adaptive stimulus sampling.
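    Adaptive stimulus sampling of the kind mentioned can be illustrated with a simple 1-up/1-down staircase that centres comparison stimuli on a simulated observer's point of subjective equality (PSE). All parameters, and the observer model itself, are illustrative assumptions, not the thesis's procedure:

    ```python
    import random

    random.seed(0)
    TRUE_PSE = 1.10  # simulated observer: perceives the target 10% larger
    step = 0.05

    def observer_says_bigger(comparison):
        # Simulated forced-choice response with a little decision noise.
        return comparison + random.gauss(0, 0.02) > TRUE_PSE

    level = 1.0  # starting comparison size, relative to the target
    history = []
    for _ in range(60):
        history.append(level)
        # 1-up/1-down rule: shrink after "bigger", grow after "smaller".
        level += -step if observer_says_bigger(level) else step

    # Average the late, converged trials to estimate the PSE.
    estimate = sum(history[20:]) / len(history[20:])
    print(round(estimate, 3))
    ```

    A 1-up/1-down rule converges on the 50% point of the psychometric function, so the late-trial average tracks the PSE and keeps the sampled stimuli centred on it.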

    Event-based neuromorphic stereo vision

    An end-to-end review of gaze estimation and its interactive applications on handheld mobile devices

    In recent years we have witnessed an increasing number of interactive systems on handheld mobile devices which utilise gaze as a single or complementary interaction modality. This trend is driven by the enhanced computational power of these devices, the higher resolution and capacity of their cameras, and improved gaze estimation accuracy obtained from advanced machine learning techniques, especially deep learning. As the literature progresses rapidly, there is a pressing need to review the state of the art, delineate the boundaries, and identify the key research challenges and opportunities in gaze estimation and interaction. This paper aims to serve this purpose by presenting an end-to-end holistic view of the area, from gaze-capturing sensors, to gaze estimation workflows, to deep learning techniques, to gaze interactive applications.
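    A common final step in such gaze estimation workflows is a person-specific calibration that maps eye features to screen coordinates. A minimal least-squares sketch of that idea on synthetic data; the affine-map assumption, the 9-point grid, and all numbers are illustrative, not any surveyed system's method:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic calibration: a 9-point grid of screen targets and noisy 2-D eye
    # features related to them through an unknown affine transform.
    targets = np.array([[x, y] for x in (0.1, 0.5, 0.9) for y in (0.1, 0.5, 0.9)])
    A_true = np.array([[1.8, 0.1], [-0.2, 2.1]])
    b_true = np.array([0.3, -0.1])
    features = (targets - b_true) @ np.linalg.inv(A_true).T
    features += 0.005 * rng.standard_normal(features.shape)

    # Fit an affine map (feature -> screen) with ordinary least squares.
    X = np.hstack([features, np.ones((len(features), 1))])  # add a bias column
    W, *_ = np.linalg.lstsq(X, targets, rcond=None)

    # Predict the gaze point for a new feature vector (with bias term).
    pred = np.array([0.2, 0.3, 1.0]) @ W
    print(pred)
    ```

    Deep appearance-based estimators typically replace the hand-picked features with CNN outputs, but a lightweight per-user calibration layer of this form often remains on top.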