    Investigating the effectiveness of an efficient label placement method using eye movement data

    This paper focuses on improving the efficiency and effectiveness of dynamic and interactive maps in relation to the user. A label placement method with improved algorithmic efficiency is presented. Since this algorithm influences the actual placement of the name labels on the map, we tested whether the more efficient algorithm also produces more effective maps, that is, how well the user processes the presented information. We tested 30 participants while they worked on a dynamic and interactive map display. Their task was to locate geographical names on each of the presented maps. Their eye movements were registered together with the time at which a given label was found. The gathered data reveal no difference between the two map designs in the users' response times, nor in the number or duration of fixations. The results of this study show that the efficiency of label placement algorithms can be improved without disturbing the user's cognitive map. Consequently, we created a more efficient map without affecting its effectiveness for the user.
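
    The abstract does not reproduce the placement algorithm itself, so the following is only a point of reference: a minimal sketch of the classic greedy, four-position point-label placement scheme that work in this area typically builds on. Every name and the candidate model are illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass(frozen=True)
class Box:
    x: float
    y: float
    w: float
    h: float

    def overlaps(self, other: "Box") -> bool:
        # Axis-aligned boxes overlap unless one lies fully to the side of the other.
        return not (self.x + self.w <= other.x or other.x + other.w <= self.x or
                    self.y + self.h <= other.y or other.y + other.h <= self.y)

def candidates(px: float, py: float, w: float, h: float) -> List[Box]:
    # Classic four-position model: the label may sit at any corner of the point.
    return [Box(px, py, w, h), Box(px - w, py, w, h),
            Box(px, py - h, w, h), Box(px - w, py - h, w, h)]

def place_labels(features: List[Tuple[float, float, float, float]]) -> List[Optional[Box]]:
    """Greedy one-pass placement: keep the first candidate position that
    conflicts with nothing placed so far; leave the label off otherwise."""
    placed: List[Box] = []
    result: List[Optional[Box]] = []
    for px, py, w, h in features:
        choice = next((c for c in candidates(px, py, w, h)
                       if not any(c.overlaps(p) for p in placed)), None)
        if choice is not None:
            placed.append(choice)
        result.append(choice)
    return result
```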

    Artificially created stimuli produced by a genetic algorithm using a saliency model as its fitness function show that Inattentional Blindness modulates performance in a pop-out visual search paradigm

    Salient stimuli are more readily detected than less salient stimuli, and individual differences in such detection may be relevant to why some people fail to notice an unexpected stimulus that appears in their visual field whereas others do notice it. This failure to notice unexpected stimuli is termed 'Inattentional Blindness' and is more likely to occur when we are engaged in a resource-consuming task. A genetic algorithm is described in which artificial stimuli are created using a saliency model as the fitness function. These generated stimuli, which vary in their saliency level, are used in two studies that implement a pop-out visual search task to evaluate the power of the model to discriminate between the performance of people who were and were not Inattentionally Blind (IB). In one study the number of orientation filters in the model was increased to check whether discriminatory power and the saliency estimation for low-level images could be improved. Results show that the performance of the model does improve when additional filters are included, leading to the conclusion that low-level images may require a higher number of orientation filters for the model to better predict participants' performance. In both studies we found that, given the same target patch image (i.e. the same saliency value), IB individuals take longer to identify a target than non-IB individuals. This suggests that IB individuals require a higher level of saliency in low-level visual features in order to identify target patches.
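
    As a rough illustration of the evolutionary loop the abstract describes, the sketch below evolves grayscale patches toward higher fitness. The actual studies scored stimuli with a full saliency model including orientation filters; here simple local contrast stands in for that model, and all function names and parameter values are assumptions.

```python
import random

PATCH = 16  # side length of a square grayscale patch (toy stand-in for the stimuli)

def random_patch() -> list:
    return [random.random() for _ in range(PATCH * PATCH)]

def fitness(patch: list) -> float:
    # Toy saliency: local contrast (pixel variance). The studies used a real
    # saliency model with centre-surround and orientation filters here.
    mean = sum(patch) / len(patch)
    return sum((v - mean) ** 2 for v in patch) / len(patch)

def crossover(a: list, b: list) -> list:
    cut = random.randrange(1, len(a))  # single-point crossover
    return a[:cut] + b[cut:]

def mutate(patch: list, rate: float = 0.01) -> list:
    return [random.random() if random.random() < rate else v for v in patch]

def evolve(pop_size: int = 40, generations: int = 50) -> list:
    pop = [random_patch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)
```

    Varying the generation at which patches are sampled yields stimuli spread across saliency levels, which is the property the studies exploit.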

    SLS-PLAN-IT: A knowledge-based blackboard scheduling system for Spacelab life sciences missions

    The primary scheduling tool in use during the Spacelab Life Sciences 1 (SLS-1) planning phase was the operations-research-based, tabular-form Experiment Scheduling System (ESS) developed by NASA Marshall. PLAN-IT is an artificial-intelligence-based interactive graphical timeline editor for ESS developed by JPL. The PLAN-IT software was enhanced for use in scheduling Spacelab experiments to support the SLS missions. The enhanced software, the SLS-PLAN-IT System, was used to support the real-time reactive scheduling task during the SLS-1 mission. SLS-PLAN-IT is a frame-based blackboard scheduling shell which, from scheduling input, creates resource-requiring event duration objects and resource-usage duration objects. The blackboard structure keeps track of the effects of the event duration objects on the resource usage objects. Various scheduling heuristics are coded in procedural form and can be invoked at any time at the user's request. The system architecture is described, along with the lessons learned from the SLS-PLAN-IT project.
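
    The sketch below illustrates, under stated assumptions, the core blackboard idea in the abstract: events that require resources are posted onto a shared resource-usage profile, and procedurally coded heuristics (here, a simple earliest-fit rule) consult that profile when invoked. The class and function names are hypothetical; the actual SLS-PLAN-IT shell is frame-based and far richer.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Event:
    name: str
    duration: int          # length in discrete time slots
    needs: Dict[str, int]  # resource name -> units required while running

@dataclass
class Blackboard:
    horizon: int                 # number of time slots in the schedule
    capacity: Dict[str, int]     # resource name -> units available per slot
    usage: Dict[str, List[int]] = field(default_factory=dict)

    def __post_init__(self):
        # One usage profile per resource, tracked slot by slot.
        self.usage = {r: [0] * self.horizon for r in self.capacity}

    def fits(self, ev: Event, start: int) -> bool:
        return all(self.usage[r][t] + units <= self.capacity[r]
                   for r, units in ev.needs.items()
                   for t in range(start, start + ev.duration))

    def post(self, ev: Event, start: int) -> None:
        # Posting an event updates the shared resource-usage profile,
        # which is what later heuristic invocations inspect.
        for r, units in ev.needs.items():
            for t in range(start, start + ev.duration):
                self.usage[r][t] += units

def earliest_fit(bb: Blackboard, ev: Event) -> int:
    """One example heuristic: place the event at the first feasible slot."""
    for start in range(bb.horizon - ev.duration + 1):
        if bb.fits(ev, start):
            bb.post(ev, start)
            return start
    raise ValueError(f"no feasible slot for {ev.name}")
```

    Reactive rescheduling, as during the SLS-1 mission, would amount to removing or re-posting events against the same usage profiles and re-running whichever heuristic the user selects.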

    The evolution of a visual-to-auditory sensory substitution device using interactive genetic algorithms

    Sensory substitution is a promising technique for mitigating the loss of a sensory modality. Sensory Substitution Devices (SSDs) work by converting information from the impaired sense (e.g. vision) into another, intact sense (e.g. audition). However, there is a potentially infinite number of ways of converting images into sounds, and it is important that the conversion takes into account the limits of human perception and other user-related factors (e.g. whether the sounds are pleasant to listen to). The device explored here is termed “polyglot” because it generates a very large set of solutions. Specifically, we adapt a procedure that has been in widespread use in the design of technology but has rarely been used as a tool to explore perception: Interactive Genetic Algorithms. In this procedure, a very large range of potential sensory substitution devices can be explored by creating a set of ‘genes’ with different allelic variants (e.g. different ways of translating luminance into loudness). The most successful devices are then ‘bred’ together, and we statistically explore the characteristics of the selected-for traits after multiple generations. The aim of the present study is to produce design guidelines for a better SSD. In three experiments we vary the way that the fitness of the device is computed: by asking users to rate the auditory aesthetics of different devices (Experiment 1), by measuring the ability of participants to match sounds to images (Experiment 2), and by measuring their ability to perceptually discriminate between two sounds derived from similar images (Experiment 3). In each case the traits selected for by the genetic algorithm represent the ideal SSD for that task. Taken together, these traits can guide the design of a better SSD.
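
    To make the gene/allele framing concrete, here is a hedged sketch of an interactive genetic algorithm over hypothetical SSD genes; the specific genes and alleles shown are illustrative, not the paper's. The fitness function is deliberately left as a callback because, in the interactive procedure, fitness comes from human ratings or task performance rather than from code.

```python
import random

# Hypothetical gene pool: each gene has a few allelic variants describing
# one aspect of the image-to-sound conversion.
GENES = {
    "luminance_to_loudness": ["linear", "log", "inverted"],
    "y_axis_to_pitch":       ["low_is_low", "low_is_high"],
    "scan_direction":        ["left_to_right", "centre_out"],
    "timbre":                ["sine", "triangle", "noise_band"],
}

def random_device() -> dict:
    return {gene: random.choice(alleles) for gene, alleles in GENES.items()}

def breed(parent_a: dict, parent_b: dict, mutation_rate: float = 0.1) -> dict:
    # Uniform crossover over genes, with occasional mutation to a random allele.
    child = {g: random.choice([parent_a[g], parent_b[g]]) for g in GENES}
    for g in child:
        if random.random() < mutation_rate:
            child[g] = random.choice(GENES[g])
    return child

def evolve(rate_fn, pop_size: int = 12, generations: int = 10) -> dict:
    """rate_fn stands in for the interactive step: each device is scored by
    a user rating or a task measure, and the best devices are bred together."""
    pop = [random_device() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=rate_fn, reverse=True)
        parents = scored[: pop_size // 3]
        pop = parents + [breed(random.choice(parents), random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=rate_fn)
```

    Swapping in different rate_fn callbacks mirrors the three experiments, where aesthetic ratings, sound-to-image matching accuracy, and discrimination accuracy each served as the fitness measure.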