What's on TV? Detecting age-related neurodegenerative eye disease using eye movement scanpaths
Purpose: We test the hypothesis that age-related neurodegenerative eye disease can be detected by examining patterns of eye movement recorded whilst a person naturally watches a movie.
Methods: Thirty-two elderly people with healthy vision (median age: 70, interquartile range [IQR] 64–75 years) and 44 patients with a clinical diagnosis of glaucoma (median age: 69, IQR 63–77 years) had standard vision examinations including automated perimetry. Disease severity was measured using a standard clinical measure (visual field mean deviation; MD). All study participants viewed three unmodified TV and film clips on a computer setup incorporating the EyeLink 1000 eye tracker (SR Research, Ontario, Canada). Eye movement scanpaths were plotted using novel methods that first filtered the data and then generated saccade density maps. Maps were then subjected to feature extraction using kernel principal component analysis (KPCA). Features from the KPCA were then classified using a standard machine-learning classifier, trained and tested by 10-fold cross-validation repeated 100 times to estimate the confidence interval (CI) of classification sensitivity and specificity.
Results: Patients had a range of disease severity from early to advanced (median [IQR] right eye and left eye MD was −7 [−13 to −5] dB and −9 [−15 to −4] dB, respectively). Average sensitivity for correctly identifying a glaucoma patient at a fixed specificity of 90% was 79% (95% CI: 58–86%). The area under the Receiver Operating Characteristic curve was 0.84 (95% CI: 0.82–0.87).
Conclusions: Large volumes of data from scanpaths of eye movements recorded whilst people freely watch TV-type films can be processed into maps that contain a signature of vision loss. In this proof-of-principle study we have demonstrated that a group of patients with age-related neurodegenerative eye disease can be reasonably well separated from a group of healthy peers by considering these eye movement signatures alone.
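The pipeline described above (saccade density maps → KPCA features → cross-validated classifier) can be sketched as follows, assuming scikit-learn is available. The study's exact kernel, component count and classifier are not stated here, so the RBF kernel, 10 components and linear SVM below are illustrative stand-ins, and the data are random placeholders for the flattened density maps.

```python
# Sketch only: random stand-ins for 76 participants' flattened 20x20
# saccade density maps; kernel/classifier choices are assumptions.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(76, 400))                  # 76 participants, 400 map cells
y = np.r_[np.zeros(32, int), np.ones(44, int)]  # 0 = control, 1 = glaucoma

pipe = make_pipeline(KernelPCA(n_components=10, kernel="rbf"),
                     SVC(kernel="linear"))

# 10-fold cross-validation, repeated (100 times in the study; 3 here to
# keep the sketch quick) to estimate classification performance.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv)
print(scores.mean(), scores.std())
```

On random inputs the classifier scores near chance; with real density maps the same pipeline yields the sensitivity/specificity estimates reported.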
Automated usability analysis and visualisation of eye tracking data
Usability is a critical aspect of the success of any application. It can be the deciding factor
for which an application is chosen and can have a dramatic effect on the productivity of
users. Eye tracking has been successfully utilised as a usability evaluation tool, because of
the strong link between where a person is looking and their cognitive activity. Currently,
eye tracking usability evaluation is a time-intensive process, requiring extensive human
expert analysis. It is therefore only feasible for small-scale usability testing.
This study developed a method to reduce the time expert analysts spend interpreting
eye tracking results, by automating part of the analysis process. This was accomplished
by comparing the visual strategy of a benchmark user against the visual strategies of the
remaining participants. A comparative study demonstrates how the resulting metrics
highlight the same tasks with usability issues, as identified by an expert analyst. The
method also produces visualisations to assist the expert in identifying problem areas on
the user interface.
Eye trackers are now available for various mobile devices, providing the opportunity to
perform large-scale, remote eye tracking usability studies. The proposed approach makes
it feasible to analyse these extensive eye tracking datasets and improve the usability of
an application.
Dissertation (MSc), Computer Science, University of Pretoria, 2014.
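As a rough illustration of the benchmark-comparison idea, one common way to score scanpath similarity (not necessarily the dissertation's metric) is to encode each participant's scanpath as a string of Area-of-Interest (AOI) labels and compare it to the benchmark user's string by normalised string-edit distance:

```python
# Illustrative sketch; AOI labels, participant names and the choice of
# Levenshtein distance are assumptions, not the dissertation's method.
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def scanpath_similarity(benchmark: str, other: str) -> float:
    """1.0 = identical AOI sequences, 0.0 = maximally different."""
    longest = max(len(benchmark), len(other), 1)
    return 1.0 - edit_distance(benchmark, other) / longest

benchmark = "ABCDC"  # AOIs visited, in order, by the benchmark user
participants = {"p1": "ABCDC", "p2": "ABDCC", "p3": "EEEEE"}
for name, path in participants.items():
    print(name, round(scanpath_similarity(benchmark, path), 2))
```

Low similarity scores flag participants whose visual strategy diverges from the benchmark, so the corresponding tasks can be prioritised for expert review.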
Clearing the Clouds: Extracting 3D information from amongst the noise
Advancements permitting the rapid extraction of 3D point clouds from a variety of imaging modalities across the global landscape have provided a vast collection of high fidelity digital surface models. This has created a situation with unprecedented overabundance of 3D observations which greatly outstrips our current capacity to manage and infer actionable information. While years of research have removed some of the manual analysis burden for many tasks, human analysis is still a cornerstone of 3D scene exploitation. This is especially true for complex tasks which necessitate comprehension of scale, texture and contextual learning. In order to ameliorate the interpretation burden and enable scientific discovery from this volume of data, new processing paradigms are necessary to keep pace.
With this context, this dissertation advances fundamental and applied research in 3D point cloud data pre-processing and deep learning from a variety of platforms. We show that the representation of 3D point data is often not ideal and sacrifices fidelity, context or scalability. First, ground-scanning terrestrial Light Detection And Ranging (LiDAR) models are shown to have an inherent statistical bias, and a state-of-the-art method is presented for correcting it while preserving data fidelity and maintaining semantic structure. This technique is assessed in the dense canopy of Micronesia, where it proves best at retaining high levels of detail under extreme down-sampling (< 1%). Airborne systems are then explored, and a method is presented for pre-processing data to preserve global contrast and semantic content for deep learners. This approach is validated on a building footprint detection task using airborne imagery captured in eastern Tennessee from the 3D Elevation Program (3DEP); it was found to achieve significant accuracy improvements over traditional techniques. Finally, topography data spanning the globe are used to assess past and present global land cover change. Utilizing Shuttle Radar Topography Mission (SRTM) and Moderate Resolution Imaging Spectroradiometer (MODIS) data, paired with the airborne pre-processing technique described previously, a model is described for predicting land-cover change from topography observations.
The culmination of these efforts has the potential to enhance the capabilities of automated 3D geospatial processing, substantially lightening the burden on analysts, with implications for improving our responses to global security, disaster response, climate change, structural design and extraplanetary exploration.
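For context, the conventional baseline that bias-aware down-sampling methods are measured against is simple voxel-grid thinning, which discards detail uniformly. A minimal sketch (this is the baseline, not the dissertation's bias-correcting method; the cloud is a synthetic stand-in):

```python
# Voxel-grid down-sampling: keep one centroid per occupied voxel.
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Average the points falling in each cubic voxel of side `voxel`."""
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points by voxel index, then average each group.
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    n_voxels = inverse.max() + 1
    sums = np.zeros((n_voxels, 3))
    counts = np.zeros(n_voxels)
    np.add.at(sums, inverse, points)   # unbuffered scatter-add
    np.add.at(counts, inverse, 1)
    return sums / counts[:, None]

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 10, size=(100_000, 3))  # synthetic 10 m cube scan
thinned = voxel_downsample(cloud, voxel=1.0)
print(len(cloud), "->", len(thinned))
```

Because every point in a voxel collapses to one centroid, dense regions lose fine structure first, which is exactly the fidelity loss the dissertation's correction aims to avoid.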
Eye movements, search and perception of visual field defects in glaucoma
Glaucoma is a progressive disease of the optic nerve that can result in irreversible loss of visual function and impairment in everyday visual tasks. The experimental studies described in this thesis primarily aim to investigate the performance of people with glaucoma on search and other visual tasks whilst simultaneously monitoring eye movements, making comparison with age-similar, visually healthy people. In an experiment focussing on visual search, a patient group (n=30) took significantly longer on average to find a target in images of everyday scenes than controls (n=30). Furthermore, comparison of eye movements made by the participants during this task revealed there was a statistically significant reduction (6%) in saccade rate in the patients compared to the controls, and that saccade rate correlated with performance. Similar differences in eye movements were observed when the same groups passively viewed a selection of images in a slideshow. A bivariate contour ellipse (BCE) analysis revealed that, on average, patients viewed smaller regions of the images compared to the controls. Eye movement differences between patients and controls were also examined in a different cohort of people with glaucoma (n=14) and visually healthy controls (n=22) whilst they watched a selection of Hazard Perception Test driving films. Saccade rate of the patients was found to increase by 9%, though results from the BCE analysis suggested the average size of viewing area was similar in both groups. Finally, a novel interview-based study of 50 people with glaucoma provides evidence that patients do not perceive their visual field defect as a black 'tunnel' effect, or as 'black patches', but more like blurred regions: this finding may, for example, impact on how glaucomatous visual field loss is depicted in patient information about the condition.
In conclusion, the results from this thesis show how visual loss from glaucoma influences how patients perceive and react to their visual environment. The principal findings from the studies described in this thesis also show, for the first time, that eye movement analysis could provide a window into the functional deficits associated with glaucoma.
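The bivariate contour ellipse analysis used above summarises how widely gaze ranged over an image: the area of the ellipse that encompasses a chosen proportion of fixation positions. A minimal sketch, where the fixation coordinates, units (degrees) and encompassed proportion (63.2%, i.e. k = 1) are illustrative assumptions:

```python
# Sketch of a bivariate contour ellipse area (BCEA) calculation;
# the thesis's exact parameter choices are not specified here.
import numpy as np

def bcea(x: np.ndarray, y: np.ndarray, proportion: float = 0.632) -> float:
    """Area (deg^2) of the ellipse covering `proportion` of fixations.

    BCEA = 2*k*pi*sigma_x*sigma_y*sqrt(1 - rho^2), with k = -ln(1 - P).
    """
    k = -np.log(1.0 - proportion)
    rho = np.corrcoef(x, y)[0, 1]          # x/y correlation of fixations
    return 2.0 * k * np.pi * x.std(ddof=1) * y.std(ddof=1) * np.sqrt(1.0 - rho**2)

rng = np.random.default_rng(2)
gx = rng.normal(0, 2.0, 500)   # horizontal fixation positions (deg)
gy = rng.normal(0, 1.0, 500)   # vertical fixation positions (deg)
print(round(bcea(gx, gy), 2))
```

Smaller BCEA values correspond to gaze confined to a smaller region of the image, the pattern reported for the patient group during passive viewing.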
Additive Manufacturing (AM) of Metallic Alloys
The introduction of metal AM processes in industrial sectors such as the aerospace, automotive, defense, jewelry, medical and tool-making fields has led to a significant reduction in waste material and in component lead times, as well as to innovative designs with higher strength, lower weight, and fewer potential failure points from joining features. This Special Issue on "Additive Manufacturing (AM) of Metallic Alloys" contains a mixture of review articles and original contributions on some of the problems that limit the wider uptake and exploitation of metals in AM.
The effect on learners' strategies of varying computer-based representations: evidence from gazes, actions, utterances and sketches
Computer-based Multiple External Representations (MERs) have been found in some cases to help and in others to hinder the learning process. This thesis examines how varying the external representations that are presented in a computer environment influences the strategies that learners choose when tackling mathematics tasks. It has been noted (Ainsworth, 2006) that learners fail to transfer insights from one representation to another. Previous work analysing video data of learners' problem-solving with computer-based MERs emphasises the need to identify which representation is being considered by a learner as utterances are made, and to examine more closely learners' movement between representations. This research focuses on the relationship between strategy and representation during learners' problem solving.
A set of analytical techniques was developed to characterise learner strategies, to identify how different computer-based MERs influence strategy choices, and to explore how these choices change over the course of task completion. Rich data were collected using a variety of technologies: learners' shifts in attention were recorded using an unobtrusive eye-tracking device and screen capture software; keyboard and mouse actions were logged automatically; utterances and gestures were video recorded; notes and sketches were recorded in real-time using a Tablet PC. This research suggests how integrated analysis of learners' gazes, actions, writing, sketches and utterances can better illuminate subtle cognitive strategies.
The study involved completion of three tasks by eighteen participants using multiple mathematical representations (numbers, graphs and algebra) presented in different computer-based 'instantiations': Static (non-moving, non-changing, non-interactive); Dynamic (capable of animation following keyboard inputs); Interactive (directly manipulable using a mouse).
Having computer-based MERs available to learners provides an opportunity to use representations with which they are comfortable. A detailed analysis showed that both representation and instantiation have an impact on strategy choice. It identified differences in expression of inferences, construction of visual images, and attention to representations between different types of instantiation. One of the important findings of the research is that learners are less likely to use imagining strategies when representational instantiation is Interactive. These results may provide some explanation of how interactivity helps or hinders learners' understanding of multiple representations.
Change blindness: eradication of gestalt strategies
Arrays of eight, texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.