Kinematic Analysis of Rapid Eye Movements for Vestibular Disorders
The system under development provides a means to assess the semi-circular canals of the human vestibular system. The Impulse Test is a simple method to detect disorders within the three sets of semi-circular canals by stimulating each pair of canals in turn. This report describes the work carried out to develop a simple, non-intrusive system whereby the patient can be assessed in a matter of seconds. The system consists of a single high-speed monochrome camera connected to a computer running the developed software. The main area of work so far has been the implementation of an accurate image-processing technique to track both the head and the eyes. Pattern recognition was attempted first, but met with limited success. The image-processing method then shifted to thresholding performed upon the eye. Modelling the head and eye in three dimensions was also an integral part of the project. The eye's origin must be accurately represented, as eye velocity is measured relative to this point; inaccuracies in describing the eye's origin therefore yield misleading results.
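The thresholding step described above can be sketched minimally: the pupil is the darkest region of a monochrome eye image, so pixels below an intensity threshold are taken as pupil candidates and their centroid gives the pupil centre. The 8x8 synthetic "frame" and the threshold value are illustrative assumptions, not the report's actual parameters.

```python
def pupil_centroid(image, threshold):
    """Return (row, col) centroid of pixels darker than `threshold`."""
    rows = cols = count = 0
    for r, line in enumerate(image):
        for c, value in enumerate(line):
            if value < threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None  # no pupil candidate found
    return (rows / count, cols / count)

# Synthetic grayscale frame: bright sclera (200) with a dark pupil blob (30).
frame = [[200] * 8 for _ in range(8)]
for r in range(3, 6):
    for c in range(2, 5):
        frame[r][c] = 30

print(pupil_centroid(frame, threshold=100))  # -> (4.0, 3.0)
```

In practice the threshold would be adapted per subject and per lighting condition, which is exactly the person-dependent tuning such systems try to minimise.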
Deep into the Eyes: Applying Machine Learning to improve Eye-Tracking
Eye-tracking has been an active research area with applications in personal and behavioral studies, medical diagnosis, virtual reality, and mixed reality applications. Improving the robustness, generalizability, accuracy, and precision of eye-trackers while maintaining privacy is crucial. Unfortunately, many existing low-cost portable commercial eye trackers suffer from signal artifacts and a low signal-to-noise ratio. These trackers are highly dependent on low-level features such as pupil edges or diffused bright spots in order to precisely localize the pupil and corneal reflection. As a result, they are not reliable for studying eye movements that require high precision, such as microsaccades, smooth pursuit, and vergence. Additionally, these methods suffer from reflective artifacts, occlusion of the pupil boundary by the eyelid and often require a manual update of person-dependent parameters to identify the pupil region. In this dissertation, I demonstrate (I) a new method to improve precision while maintaining the accuracy of head-fixed eye trackers by combining velocity information from iris textures across frames with position information, (II) a generalized semantic segmentation framework for identifying eye regions with a further extension to identify ellipse fits on the pupil and iris, (III) a data-driven rendering pipeline to generate a temporally contiguous synthetic dataset for use in many eye-tracking applications, and (IV) a novel strategy to preserve privacy in eye videos captured as part of the eye-tracking process. My work also provides the foundation for future research by addressing critical questions like the suitability of using synthetic datasets to improve eye-tracking performance in real-world applications, and ways to improve the precision of future commercial eye trackers with improved camera specifications.
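Contribution (I) rests on a general idea: velocity estimated from iris texture is precise over short spans but drifts when integrated, while direct position estimates are noisy but unbiased. A complementary filter is one simple way to fuse the two; the filter form and the alpha value below are illustrative assumptions, not the dissertation's actual estimator.

```python
def fuse(positions, velocities, dt, alpha=0.95):
    """Blend integrated velocity with direct position measurements."""
    est = positions[0]
    out = [est]
    for pos, vel in zip(positions[1:], velocities[1:]):
        predicted = est + vel * dt                    # precise short-term prediction
        est = alpha * predicted + (1 - alpha) * pos   # position corrects slow drift
        out.append(est)
    return out

# Constant 10 deg/s motion sampled at 100 Hz, with noisy position samples.
dt = 0.01
true = [10.0 * i * dt for i in range(5)]
noisy = [t + n for t, n in zip(true, [0.0, 0.05, -0.05, 0.04, -0.03])]
vels = [10.0] * 5
print(fuse(noisy, vels, dt))
```

The fused estimates track the true trajectory much more tightly than the raw position samples, which is the precision gain the abstract describes.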
Collecting and Analyzing Eye-Tracking Data in Outdoor Environments
Natural outdoor conditions pose unique obstacles for researchers, above and beyond those inherent to all mobile eye-tracking research. During analyses of a large set of eye-tracking data collected on geologists examining outdoor scenes, we have found that the nature of calibration, pupil identification, fixation detection, and gaze analysis all require procedures different from those typically used for indoor studies. Here, we discuss each of these challenges and present solutions, which together define a general method useful for investigations relying on outdoor eye-tracking data. We also discuss recommendations for improving the tools that are available, to further increase the accuracy and utility of outdoor eye-tracking data.
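A common baseline for the fixation-detection step mentioned above is the dispersion-threshold algorithm (I-DT): a window of gaze samples counts as a fixation if its spatial dispersion stays below a threshold for a minimum duration. The thresholds below are illustrative; outdoor data with larger head motion typically needs retuned values, which is part of the point the abstract makes.

```python
def idt_fixations(gaze, max_dispersion, min_samples):
    """Return (start, end) index pairs of fixations detected with I-DT."""
    def dispersion(window):
        xs = [p[0] for p in window]
        ys = [p[1] for p in window]
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations = []
    i = 0
    while i + min_samples <= len(gaze):
        j = i + min_samples
        if dispersion(gaze[i:j]) <= max_dispersion:
            # Grow the window while dispersion stays under the threshold.
            while j < len(gaze) and dispersion(gaze[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1
    return fixations

# Ten stable samples, a saccadic jump, then five more stable samples.
trace = [(0.0, 0.0)] * 10 + [(5.0, 5.0)] * 5
print(idt_fixations(trace, max_dispersion=1.0, min_samples=5))  # -> [(0, 9), (10, 14)]
```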
Event Detection in Eye-Tracking Data for Use in Applications with Dynamic Stimuli
This doctoral thesis has signal processing of eye-tracking data as its main theme. An eye-tracker is a tool used for estimation of the point where one is looking. Automatic algorithms for classification of different types of eye movements, so-called events, form the basis for relating the eye-tracking data to cognitive processes during, e.g., reading a text or watching a movie. The problems with the algorithms available today are that few can handle detection of events during dynamic stimuli and that there is no standardized procedure for evaluating them. This thesis comprises an introduction and four papers describing methods for detection of the most common types of eye movements in eye-tracking data and strategies for evaluation of such methods. The most common types of eye movements are fixations, saccades, and smooth pursuit movements. In addition to these eye movements, the event type post-saccadic oscillations (PSO) is considered. The eye-tracking data in this thesis are recorded using both high- and low-speed eye-trackers. The first paper presents a method for detection of saccades and PSO. The saccades are detected using the acceleration signal and three specialized criteria based on directional information. In order to detect PSO, the interval after each saccade is modeled and the parameters of the model are used to determine whether PSO are present or not. The algorithm was evaluated by comparing the detection results to manual annotations and to the detection results of the most recent PSO detection algorithm. The results show that the algorithm is in good agreement with annotations, and has better performance than the compared algorithm. In the second paper, a method for separation of fixations and smooth pursuit movements is proposed. In the intervals between the detected saccades/PSO, the algorithm uses different spatial scales of the position signal in order to separate the two types of eye movements.
The algorithm is evaluated by computing five different performance measures, showing both general and detailed aspects of the discrimination performance. The performance of the algorithm is compared to the performance of a velocity- and dispersion-based algorithm (I-VDT), to the performance of an algorithm based on principal component analysis (I-PCA), and to manual annotations by two experts. The results show that the proposed algorithm performs considerably better than the compared algorithms. In the third paper, a method based on eye-tracking signals from both eyes is proposed for improved separation of fixations and smooth pursuit movements. The method utilizes directional clustering of the eye-tracking signals in combination with binary filters taking both temporal and spatial aspects of the eye-tracking signal into account. The performance of the method is evaluated using a novel evaluation strategy based on automatically detected moving objects in the video stimuli. The results show that the use of binocular information for separation of fixations and smooth pursuit movements is advantageous in static stimuli, without impairing the algorithm's ability to detect smooth pursuit movements in video and moving dot stimuli. The first three papers in this thesis are based on eye-tracking signals recorded using a stationary eye-tracker, while the fourth paper uses eye-tracking signals recorded using a mobile eye-tracker. In mobile eye-tracking, the user is allowed to move the head and the body, which affects the recorded data. In the fourth paper, a method for compensation of head movements using an inertial measurement unit (IMU), combined with an event detector for lower-sampling-rate data, is proposed. The event detection is performed by combining information from the eye-tracking signals with information about objects extracted from the scene video of the mobile eye-tracker.
The results show that by introducing head movement compensation and information about detected objects in the scene video in the event detector, improved classification can be achieved. In summary, this thesis proposes an entire methodological framework for robust event detection which performs better than previous methods when analyzing eye-tracking signals recorded during dynamic stimuli, and also provides a methodology for performance evaluation of event detection algorithms.
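The thesis's detectors are considerably more elaborate, but the simplest event detector in this family, and a common comparison baseline, is a plain velocity threshold (I-VT): samples whose point-to-point angular velocity exceeds a threshold are labelled saccade, the rest fixation or pursuit. The sampling rate and threshold below are assumptions for illustration.

```python
def ivt_labels(positions, fs, threshold_deg_s):
    """Label each sample 'saccade' or 'slow' by point-to-point velocity."""
    labels = ['slow']  # first sample has no preceding sample to difference
    for prev, cur in zip(positions, positions[1:]):
        velocity = abs(cur - prev) * fs  # deg/s from consecutive samples
        labels.append('saccade' if velocity > threshold_deg_s else 'slow')
    return labels

# 500 Hz horizontal trace: stable gaze, a 2-degree jump, stable again.
trace = [0.0] * 5 + [2.0] * 5
print(ivt_labels(trace, fs=500, threshold_deg_s=100))
```

A pure velocity threshold cannot separate fixations from smooth pursuit (both fall in the 'slow' class), which is precisely the gap the thesis's spatial-scale and binocular-clustering methods address.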
Eye Tracking in User Interfaces
This MSc thesis was performed during a study stay at the University of Eastern Finland, Joensuu, Finland. It presents the utilization of Eye-Tracking technology in Human-Computer Interaction (HCI). The proposed and implemented system maps co-ordinates in the plane of a scene camera, which correspond to the co-ordinates of the point of gaze, into co-ordinates in the plane of a display device. In addition, the system compensates for the user's motions and thus removes one of the main problems of using Eye-Tracking in HCI. This is achieved by determining a transformation between the projective space of the scene and the projective space of the display. The method is based on detection and description of interesting points using SURF, matching of corresponding points, and calculation of a homography. The system has been tested using test points spread over the whole display area.
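Once the homography is estimated from the SURF correspondences, the scene-to-display mapping reduces to applying a 3x3 matrix H to the gaze point in homogeneous coordinates, with a perspective divide. The matrix below (a pure translation by (10, 20)) is an illustrative stand-in; in the actual system H would be computed from matched points between the scene image and the display.

```python
def apply_homography(H, point):
    """Map a 2D point through a 3x3 homography (with perspective divide)."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

H = [[1, 0, 10],
     [0, 1, 20],
     [0, 0,  1]]
print(apply_homography(H, (100.0, 50.0)))  # -> (110.0, 70.0)
```

Because H is projective rather than affine, the same code handles the perspective distortion between the head-mounted scene camera and the flat display; only the bottom row of H differs.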
Infrared Eclipses of the Strongly Irradiated Planet WASP-33b, and Oscillations of its Host Star
We observe two secondary eclipses of the strongly irradiated transiting
planet WASP-33b in the Ks band, and one secondary eclipse each at 3.6- and 4.5
microns using Warm Spitzer. This planet orbits an A5V delta-Scuti star that is
known to exhibit low amplitude non-radial p-mode oscillations at about
0.1-percent semi-amplitude. We detect stellar oscillations in all of our
infrared eclipse data, and also in one night of observations at J-band out of
eclipse. The oscillation amplitude, in all infrared bands except Ks, is about
the same as in the optical. However, the stellar oscillations in Ks band have
about twice the amplitude as seen in the optical, possibly because the
Brackett-gamma line falls in this bandpass. We use our best-fit values for the
eclipse depth, as well as the 0.9 micron eclipse observed by Smith et al., to
explore possible states of the exoplanetary atmosphere, based on the method of
Madhusudhan and Seager. On this basis we find two possible states for the
atmospheric structure of WASP-33b. One possibility is a non-inverted
temperature structure in spite of the strong irradiance, but this model
requires an enhanced carbon abundance (C/O>1). The alternative model has solar
composition, but an inverted temperature structure. Spectroscopy of the planet
at secondary eclipse, using a spectral resolution that can resolve the water
vapor band structure, should be able to break the degeneracy between these very
different possible states of the exoplanetary atmosphere. However, both of
those model atmospheres absorb nearly all of the stellar irradiance with
minimal longitudinal re-distribution of energy, strengthening the hypothesis of
Cowan et al. that the most strongly irradiated planets circulate energy poorly.
Our measurement of the central phase of the eclipse yields e*cos(omega)=0.0003
+/-0.00013, which we regard as being consistent with a circular orbit. Comment: 23 pages, 9 figures, 3 tables, accepted for the Astrophysical Journal
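The e*cos(omega) constraint comes from the timing of the secondary eclipse: for small eccentricity, the standard relation is e*cos(omega) ≈ (pi/2)(phi_sec - 0.5), where phi_sec is the orbital phase of mid-eclipse. This is the textbook approximation, not code from the paper; the phase value below is chosen to reproduce the quoted 0.0003.

```python
import math

def ecosw_from_phase(phi_sec):
    """Approximate e*cos(omega) from the secondary-eclipse phase (small e)."""
    return (math.pi / 2) * (phi_sec - 0.5)

phi = 0.5 + 0.0003 * 2 / math.pi   # illustrative measured mid-eclipse phase
print(round(ecosw_from_phase(phi), 6))  # -> 0.0003
```

An eclipse exactly at phase 0.5 gives e*cos(omega) = 0, so a value this small is indeed consistent with a circular orbit within the quoted uncertainty.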
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation [Landman et al., 2003, Vision Research 43, 149-164]. Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) further weight is given to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
Astrometric performance of the Gemini multi-conjugate adaptive optics system in crowded fields
The Gemini Multi-conjugate adaptive optics System (GeMS) is a facility
instrument for the Gemini-South telescope. It delivers uniform,
near-diffraction-limited image quality at near-infrared wavelengths over a 2
arcminute field of view. Together with the Gemini South Adaptive Optics Imager
(GSAOI), a near-infrared wide field camera, GeMS/GSAOI's combination of high
spatial resolution and a large field of view will make it a premier facility
for precision astrometry. Potential astrometric science cases cover a broad
range of topics including exo-planets, star formation, stellar evolution, star
clusters, nearby galaxies, black holes and neutron stars, and the Galactic
center. In this paper, we assess the astrometric performance and limitations of
GeMS/GSAOI. In particular, we analyze deep, mono-epoch images, multi-epoch data
and distortion calibration. We find that for single-epoch, un-dithered data, an
astrometric error below 0.2 mas can be achieved for exposure times exceeding
one minute, provided enough stars are available to remove high-order
distortions. We show however that such performance is not reproducible for
multi-epoch observations, and an additional systematic error of ~0.4 mas is
evidenced. This systematic multi-epoch error is the dominant error term in the
GeMS/GSAOI astrometric error budget, and it is thought to be due to
time-variable distortion induced by gravity flexure. Comment: 16 pages, 22 figures, accepted for publication in MNRAS
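If the ~0.2 mas single-epoch random error and the ~0.4 mas multi-epoch systematic quoted above are independent, they combine in quadrature; the sketch below applies that standard root-sum-square budget to the paper's numbers. Independence of the terms is an assumption here, not a claim from the paper.

```python
import math

def total_error(*terms_mas):
    """Root-sum-square of independent astrometric error terms, in mas."""
    return math.sqrt(sum(t * t for t in terms_mas))

# Paper's numbers: 0.2 mas random (single epoch), ~0.4 mas systematic.
print(round(total_error(0.2, 0.4), 3))  # -> 0.447
```

The budget makes the paper's point explicit: with a 0.4 mas systematic floor, shrinking the 0.2 mas random term further buys almost no improvement in multi-epoch astrometry.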
GAIA: Composition, Formation and Evolution of the Galaxy
The GAIA astrometric mission has recently been approved as one of the next
two `cornerstones' of ESA's science programme, with a launch date target of not
later than mid-2012. GAIA will provide positional and radial velocity
measurements with the accuracies needed to produce a stereoscopic and kinematic
census of about one billion stars throughout our Galaxy (and into the Local
Group), amounting to about 1 per cent of the Galactic stellar population.
GAIA's main scientific goal is to clarify the origin and history of our Galaxy,
from a quantitative census of the stellar populations. It will advance
questions such as when the stars in our Galaxy formed, when and how it was
assembled, and its distribution of dark matter. The survey aims for
completeness to V=20 mag, with accuracies of about 10 microarcsec at 15 mag.
Combined with astrophysical information for each star, provided by on-board
multi-colour photometry and (limited) spectroscopy, these data will have the
precision necessary to quantify the early formation, and subsequent dynamical,
chemical and star formation evolution of our Galaxy. Additional products
include detection and orbital classification of tens of thousands of
extra-Solar planetary systems, and a comprehensive survey of some 10^5-10^6
minor bodies in our Solar System, through galaxies in the nearby Universe, to
some 500,000 distant quasars. It will provide a number of stringent new tests
of general relativity and cosmology. The complete satellite system was
evaluated as part of a detailed technology study, including a detailed payload
design, corresponding accuracy assessments, and results from a prototype data
reduction development. Comment: Accepted by A&A: 25 pages, 8 figures
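A worked example puts the quoted 10-microarcsecond accuracy in context: the fractional distance error from a parallax measurement is sigma_pi / pi. The 1 kpc example star below is an illustration, not a figure from the paper.

```python
def distance_error_fraction(distance_pc, sigma_uas):
    """Fractional distance error for a star at `distance_pc` parsecs,
    given a parallax uncertainty `sigma_uas` in microarcseconds."""
    parallax_uas = 1e6 / distance_pc   # parallax in microarcseconds
    return sigma_uas / parallax_uas

# A star at 1 kpc has a 1000-microarcsec parallax, so 10 uas gives 1 percent.
print(distance_error_fraction(1000.0, 10.0))  # -> 0.01
```

This is why 10 microarcseconds at 15 mag translates into percent-level distances across much of the Galaxy, enabling the kinematic census the mission targets.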