
    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task in which there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial positions of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4) = 2.565, p = 0.185]. This suggests two things: (i) Gestalt grouping is not used as a strategy in these tasks; and (ii) it lends further weight to the argument that objects may be stored in, and retrieved from, a pre-attentional store during this task.

    Detecting emotional expressions: Do words help?


    Visual and control aspects of saccadic eye movements

    Physiological, behavioral, and control investigation of rapid saccadic eye movements in humans

    Neural Network Dynamics of Visual Processing in the Higher-Order Visual System

    Vision is one of the most important human senses, facilitating rich interaction with the external environment. For example, optimal spatial localization and subsequent motor contact with a specific physical object among others requires a combination of visual attention, discrimination, and sensory-motor coordination. The mammalian brain has evolved to elegantly solve this problem of transforming visual input into an efficient motor output to interact with an object of interest. The frontal and parietal cortices are two higher-order brain areas (i.e., areas that process information beyond simple sensory transformations) that are intimately involved in assessing how an animal's internal state or prior experiences should influence cognitive-behavioral output. It is well known that activity within each region, and functional interactions between the two regions, are correlated with visual attention, decision-making, and memory performance. It is therefore not surprising that impairment of the fronto-parietal circuit is often observed in psychiatric disorders. Network- and circuit-level fronto-parietal involvement in sensory-based behavior is well studied; however, comparatively less is known about how single-neuron activity in each of these areas gives rise to such macroscopic activity. The goal of the studies in this dissertation is to address this gap in knowledge through simultaneous recordings of cellular and population activity during sensory processing and behavioral paradigms. Together, the combined narrative builds on several themes in neuroscience: variability of single-cell function, population-level encoding of stimulus properties, and state- and context-dependent neural dynamics.

    Free-Flight Odor Tracking in Drosophila Is Consistent with an Optimal Intermittent Scale-Free Search

    During their trajectories in still air, fruit flies (Drosophila melanogaster) explore their landscape using a series of straight flight paths punctuated by rapid 90° body-saccades [1]. Some saccades are triggered by visual expansion associated with collision avoidance. Yet many saccades are not triggered by visual cues, but rather appear spontaneously. Our analysis reveals that the control of these visually independent saccades and of the flight intervals between them constitutes an optimal scale-free active searching strategy. Two characteristics of mathematical optimality apparent during free flight in Drosophila are inter-saccade interval lengths distributed according to an inverse square law, which does not vary across landscape scale, and 90° saccade angles, which increase the likelihood that territory will be revisited and thereby reduce the likelihood that nearby targets will be missed. We also show that searching is intermittent, such that active searching phases randomly alternate with relocation phases. Behaviorally, this intermittency is reflected in frequently occurring short, slow-speed inter-saccade intervals randomly alternating with rarer, longer, faster inter-saccade intervals. Searching patterns that scale similarly across orders of magnitude of length (i.e., scale-free) have been revealed in animals as diverse as microzooplankton, bumblebees, albatrosses, and spider monkeys, but these do not appear to be optimised with respect to turning angle, whereas Drosophila free-flight search does. Intermittent searching patterns, such as those reported here for Drosophila, have also been observed in foragers such as planktivorous fish and ground-foraging birds. Our results with freely flying Drosophila may constitute the first reported example of searching behaviour that is both scale-free and intermittent.
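The two ingredients named above — inter-saccade intervals following an inverse-square law and fixed 90° turns — can be sketched as a toy simulation. The interval bounds, flight speed, and step count below are illustrative assumptions, not values from the study:

```python
import math
import random

def sample_interval(t_min=0.1, t_max=10.0):
    """Sample an inter-saccade interval from a truncated inverse-square
    distribution p(t) ~ t^-2 on [t_min, t_max] (bounds are assumptions).
    Inverse-CDF sampling for this density gives:
        t = t_min / (1 - u * (1 - t_min / t_max)),  u ~ Uniform(0, 1)."""
    u = random.random()
    return t_min / (1.0 - u * (1.0 - t_min / t_max))

def simulate_search(n_saccades=200, speed=0.3):
    """Toy scale-free search: straight runs at constant speed, punctuated
    by 90-degree body-saccades turning randomly left or right."""
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for _ in range(n_saccades):
        t = sample_interval()
        x += speed * t * math.cos(heading)
        y += speed * t * math.sin(heading)
        path.append((x, y))
        heading += random.choice((-1, 1)) * math.pi / 2  # 90° saccade
    return path
```

Because p(t) ~ t^-2 is heavy-tailed, most sampled runs are short while a few are very long, reproducing the intermittent alternation of local search and relocation described in the abstract.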

    Encoding of saccadic scene changes in the mouse retina

    The task of the visual system is to extract behaviourally relevant information from the visual scene. A common strategy for most animals, ranging from insects to humans, is to constantly reposition gaze by making saccades within the scene. This 'fixate and saccade' strategy seems to pose a challenge, as it introduces a highly blurred image on the retina during each saccade, yet acquires a 'snapshot' of the world during every fixation. The visual signals on the retina are thus segmented into brief image fixations separated by global motion. What is the response of a ganglion cell to the 'motion blur' caused by a saccade, and how does it influence the response to subsequent fixations? And how does the global motion signal influence the response dynamics of a ganglion cell? In this thesis, we addressed these questions through two complementary approaches. First, we analysed retinal ganglion cell responses to simulated saccades, examining two important aspects of the response: 1) the response during saccade-like motion, and 2) the response to fixation images. For about half of the recorded cells, we found strong spiking activity during the saccade. This supports the idea that the retina actively encodes the saccade and may signal the abrupt scene change to downstream brain areas. Furthermore, we characterized the responses to the newly fixated image. While there appears to be little influence of the preceding motion signal itself on these responses, the responses depended strongly on the image content during the fixation period prior to the saccade. Thus, saccadic vision may provide 'temporal context' to each fixation, and ganglion cells encode image transitions rather than currently fixated images. Based on this perspective, we classified retinal ganglion cells into five response types, suggesting that the retina encodes at least five parallel channels of information under saccadic visual stimulation.
The five response types identified in this study are as follows: 1) Classical Encoders – respond only to preferred stimuli; 2) Offset Detectors – respond only to the saccade; 3) Indifferent Encoders – respond to all fixated images; 4) Change Detectors – respond only when the new image after the saccade differs from the previous image; 5) Similarity Detectors – respond only when the new image after the saccade is similar to the previous image. Second, we analysed the influence of global motion signals on the response of a retinal ganglion cell to the stimulus in its receptive field. The stimulus beyond the receptive field is designated the remote stimulus. We chose simple stimuli that represent various configurations used in earlier studies, allowing us to compare our results with theirs. We show that the remote stimulus both enhances and suppresses the mean firing rate, but only suppresses the evoked activity. Furthermore, we show that the remote stimulus decreases contrast sensitivity and modifies the response gain. Thus, ganglion cells encode the stimulus in relation to the whole scene, rather than purely responding to the stimulus in the receptive field. Our results suggest that global motion signals provide 'spatial context' to the response to the stimulus within the receptive field.
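The five-type taxonomy can be illustrated as a toy decision rule over a cell's firing in three stimulus conditions. The boolean criteria below are an illustrative reading of the category definitions, not the thesis's actual classification procedure:

```python
def classify_cell(fires_saccade, fires_new_diff, fires_new_same):
    """Toy classifier for the five response types: inputs say whether the
    cell fires during the saccade itself, after a fixation whose image
    differs from the previous one, and after a fixation whose image
    matches it. Rules are a simplified reading of the taxonomy."""
    if fires_saccade and not (fires_new_diff or fires_new_same):
        return "Offset Detector"        # responds only to the saccade
    if fires_new_diff and fires_new_same:
        return "Indifferent Encoder"    # responds to all fixated images
    if fires_new_diff and not fires_new_same:
        return "Change Detector"        # new image must differ
    if fires_new_same and not fires_new_diff:
        return "Similarity Detector"    # new image must be similar
    return "Classical Encoder"          # only its preferred stimulus drives it
```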

    Content-prioritised video coding for British Sign Language communication.

    Video communication of British Sign Language (BSL) is important for remote interpersonal communication and for the equal provision of services for deaf people. However, the use of video telephony and video conferencing applications for BSL communication is limited by inadequate video quality. BSL is a highly structured, linguistically complete, natural language system that expresses vocabulary and grammar visually and spatially using a complex combination of facial expressions (such as eyebrow movements, eye blinks and mouth/lip shapes), hand gestures, body movements and finger-spelling that change in space and time. Accurate natural BSL communication places specific demands on visual media applications which must compress video image data for efficient transmission. Current video compression schemes apply methods to reduce statistical redundancy and perceptual irrelevance in video image data based on a general model of Human Visual System (HVS) sensitivities. This thesis presents novel video image coding methods developed to achieve the conflicting requirements for high image quality and efficient coding. Novel methods of prioritising visually important video image content for optimised video coding are developed to exploit the HVS spatial and temporal response mechanisms of BSL users (determined by Eye Movement Tracking) and the characteristics of BSL video image content. The methods implement an accurate model of HVS foveation, applied in the spatial and temporal domains, at the pre-processing stage of a current standard-based system (H.264). Comparison of the performance of the developed and standard coding systems, using methods of video quality evaluation developed for this thesis, demonstrates improved perceived quality at low bit rates. BSL users, broadcasters and service providers benefit from the perception of high quality video over a range of available transmission bandwidths. 
The research community benefits from a new approach to video coding optimisation and a better understanding of the communication needs of deaf people.
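The idea of foveated prioritisation — spending bits where the BSL viewer's gaze is expected to rest, such as the signer's face — can be sketched as an eccentricity-dependent quantiser offset applied per macroblock. The Gaussian falloff and the `sigma_deg` and `max_extra_qp` values are illustrative assumptions, not the HVS foveation model developed in the thesis:

```python
import math

def foveation_weight(ecc_deg, sigma_deg=5.0):
    """Toy foveation model: target quality falls off as a Gaussian of
    retinal eccentricity (degrees) from the predicted fixation point.
    sigma_deg is an assumed constant, not an eye-tracking-derived value."""
    return math.exp(-(ecc_deg ** 2) / (2 * sigma_deg ** 2))

def quant_offset(ecc_deg, max_extra_qp=12):
    """Map the weight to an extra quantisation-parameter offset for an
    H.264-style encoder: foveal blocks get offset 0 (full quality),
    peripheral blocks a coarser quantiser, i.e. fewer bits."""
    return round(max_extra_qp * (1.0 - foveation_weight(ecc_deg)))
```

Applied as a pre-processing or rate-control hint, such a map concentrates bandwidth on the regions BSL users actually fixate, which is the general mechanism behind the perceived-quality gains reported at low bit rates.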

    Perception of Color Break-Up

    Background. A color-distorting artifact called Color Break-Up (CBU) has been investigated.
Disturbing CBU effects occur when eye movements (e.g., pursuits or saccades) are performed during the presentation of content on Field-Sequential Color (FSC) display or projection systems, where the primary colors are displayed sequentially rather than simultaneously. Methods. A mixed design of empirical research and theoretical modeling was used to address the main research questions. The studies evaluated the impact of hardware-based, content-based, and viewer-based factors on the sample's CBU perception. In a first step, visual performance parameters (e.g., color vision), short-term state (e.g., attention level), and long-term personality traits (e.g., affinity for technology) of the sample were recorded. Participants were then asked to rate the perceived CBU intensity of different video sequences presented by an FSC-based projector. The setup allowed the size of the CBU-provoking content (1.0 to 6.0°), its luminance level (10.0 to 157.0 cd/m2), the participant's eye movement pattern (pursuit velocity: 18.0 to 54.0 deg/s; saccadic amplitude: 3.6 to 28.2°), the position of retinal stimulation (0.0 to 50.0°), and the projector's frame rate (30.0 to 420.0 Hz) to be varied. Correlations between the independent variables and subjective CBU perception were tested. Complementing the empirical work, the developed model predicts a viewer's CBU perception on a theoretical basis. The model first graphically reconstructs the intensity and color characteristics of CBU effects. The visual CBU reconstruction is then compressed into representative model indices to quantify the modeled scenario with a manageable set of metrics. Finally, the model output was compared to the empirical data. Results. The high interindividual CBU variability within the sample cannot be explained by a participant's visual performance, short-term state, or long-term personality traits.
Conditions that distinctly elevate CBU perception are (1) a foveal stimulus position on the retina, (2) a small stimulus size during saccades, (3) a high eye movement velocity, and (4) a low projector frame rate (correlation described by an exponential function, r² > .93). The stimulus luminance, however, only slightly affects CBU perception. In general, the model helps to understand the fundamental processes of CBU genesis, to investigate the impact of CBU determinants, and to establish a classification scheme for different CBU variants. The model adequately predicts the empirical data within the specified tolerance ranges. Conclusions. The study results allow the determination of frame rates and content characteristics (size and position) that avoid exceeding predefined annoyance thresholds for CBU perception. The derived hardware requirements and content recommendations enable practical and evidence-based CBU management. For CBU prediction, model accuracy can be further improved by considering features of human perception, e.g., eccentricity-dependent retinal sensitivity or changes in visual perception during different types of eye movements. Participant-based data from the empirical research can be used to model these features.
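Why eye velocity worsens and frame rate mitigates CBU can be illustrated with a first-order geometric estimate. This kinematic simplification is our own illustration, not the thesis's graphical reconstruction model:

```python
def cbu_width_deg(eye_velocity_dps, frame_rate_hz, n_fields=3):
    """First-order estimate of color break-up extent on an FSC display:
    during an eye movement at the given velocity (deg/s), the retinal
    image shifts by v / (n * f) degrees between successive color fields,
    so the n primaries of an edge separate by roughly
    (n - 1) * v / (n * f) degrees in total."""
    field_duration = 1.0 / (n_fields * frame_rate_hz)  # seconds per primary
    return (n_fields - 1) * eye_velocity_dps * field_duration
```

For example, at the study's fastest pursuit of 54 deg/s, a 60 Hz three-field projector yields about 0.6° of separation, while raising the frame rate to 420 Hz shrinks it sevenfold — consistent with the reported strong inverse dependence of perceived CBU on frame rate.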

    Natural stimuli for mice: environment statistics and behavioral responses
