81 research outputs found

    Oral messages improve visual search

    Input multimodality combining speech and hand gestures has motivated numerous usability studies. By contrast, issues relating to the design and ergonomic evaluation of multimodal output messages combining speech with visual modalities have not yet been addressed extensively. The experimental study presented here addresses one of these issues. Its aim is to assess the actual efficiency and usability of oral system messages that include brief spatial information for helping users locate objects on crowded displays rapidly. Target presentation mode, scene spatial structure and task difficulty were chosen as independent variables. Two conditions were defined: the visual target presentation mode (VP condition) and the multimodal target presentation mode (MP condition). Each participant carried out two blocks of visual search tasks (120 tasks per block, one block per condition). Target presentation mode, scene structure and task difficulty were all found to be significant factors. Multimodal target presentation proved more efficient than visual target presentation. In addition, participants expressed very positive judgments about multimodal target presentations, which a majority of participants preferred to visual presentations. Moreover, the contribution of spatial messages to visual search speed and accuracy was influenced by scene spatial structure and task difficulty: (i) messages improved search efficiency to a lesser extent for 2D array layouts than for some other symmetrical layouts, although the use of 2D arrays for displaying pictures currently prevails; (ii) message usefulness increased with task difficulty. Most of these results are statistically significant. Comment: 4 pages.

    How really effective are Multimodal Hints in enhancing Visual Target Spotting? Some evidence from a usability study

    The main aim of the work presented here is to contribute to computer science advances in the area of multimodal usability, inasmuch as it addresses one of the major issues relating to the generation of effective oral system messages: how to design messages that effectively help users locate specific graphical objects in information visualisations. An experimental study was carried out to determine whether oral messages including coarse information on the locations of graphical objects on the current display can facilitate target detection tasks sufficiently to make it worthwhile to integrate such messages into GUIs. The display's spatial layout was varied in order to test the influence of visual presentation structure on the contribution of these messages to facilitating visual search on crowded displays. Finally, three levels of task difficulty were defined, based mainly on the target's visual complexity and the number of distractors in the scene. The findings suggest that spatial information messages improve participants' visual search performance significantly; that they are better suited to radial structures than to matrix, random and elliptic structures; and that they are particularly useful for performing difficult visual search tasks. Comment: 9 pages.

    A Comparison of Paper Sketch and Interactive Wireframe by Eye Movements Analysis, Survey, and Interview

    Eye movement-based analyses have been performed extensively on graphical user interface designs, mainly on high-fidelity prototypes such as coded prototypes. However, practitioners usually initiate the development life cycle with low-fidelity prototypes, such as mock-ups or sketches. Since little or no eye movement analysis has been performed on the latter, would eye tracking transpose its benefits from high- to low-fidelity prototypes and produce different results? To bridge this gap, we performed an eye movement-based analysis comparing the gaze point indexes, gaze event types and durations, and fixation and saccade indexes produced by N=8 participants across two treatments, a paper prototype vs. a wireframe. The paper also reports a qualitative analysis based on the answers provided by these participants in a semi-directed interview and on a 14-item perceived usability questionnaire. Due to its interactivity, the wireframe seems to foster a more exploratory approach to design (e.g., testing and navigating more extensively) than the paper prototype.
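    The fixation and saccade indexes mentioned above are usually derived from raw gaze samples. A common approach is dispersion-threshold identification (I-DT): consecutive samples whose spatial spread stays under a threshold for a minimum duration are grouped into one fixation. The sketch below is a minimal illustration of that idea, not the tooling used in the study; the sample format, thresholds, and data are assumptions.

    ```python
    def detect_fixations(samples, disp_thresh=30.0, min_dur=100):
        """Dispersion-threshold (I-DT) fixation detection.

        samples: list of (t_ms, x, y) gaze points at a fixed sampling rate.
        Returns a list of (start_ms, end_ms, cx, cy) fixations.
        """
        fixations = []
        i, n = 0, len(samples)
        while i < n:
            j = i
            # grow the window while it stays within the dispersion threshold
            while j + 1 < n:
                win = samples[i:j + 2]
                xs = [p[1] for p in win]
                ys = [p[2] for p in win]
                if (max(xs) - min(xs)) + (max(ys) - min(ys)) > disp_thresh:
                    break
                j += 1
            start, end = samples[i][0], samples[j][0]
            if end - start >= min_dur and j > i:
                win = samples[i:j + 1]
                cx = sum(p[1] for p in win) / len(win)
                cy = sum(p[2] for p in win) / len(win)
                fixations.append((start, end, cx, cy))
                i = j + 1
            else:
                i += 1
        return fixations

    # Hypothetical 100 Hz gaze trace: ~150 ms near one point, then a short dwell
    # elsewhere that is too brief to count as a fixation.
    stable = [(i * 10, 100.0, 100.0) for i in range(15)]
    brief = [(150 + i * 10, 300.0, 300.0) for i in range(5)]
    print(detect_fixations(stable + brief))  # → [(0, 140, 100.0, 100.0)]
    ```

    Saccade counts and durations then fall out as the gaps between successive fixations, which is how per-treatment indexes like those compared in the study are typically aggregated.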

    The Agile UX Development Lifecycle: Combining Formative Usability and Agile Methods

    This paper contributes a method variation that helps cross-functional teams combine formative usability and agile methods to develop interactive systems. Both methods are iterative, continuous and focused on delivering value to users, which makes their combination possible. The "agile UX development lifecycle" supports and facilitates the synchronization of the steps involved in formative usability testing and agile sprints in a practical manner, and is intended for design and development settings. We present a case study illustrating the extent to which this tool meets the needs of real-world cross-functional teams, describing the gains in efficiency it can provide as well as guidelines for increasing the benefits gained from this combination in design and development settings.

    Oral assistance for visual search - an experimental study of the contribution of spatial cues to target detection

    This paper describes an experimental study aimed at assessing the actual contribution of oral system messages to visual search efficiency and comfort. The messages, which include spatial information on the target location, are meant to support searches for familiar targets in collections of photographs (30 per display). 24 participants carried out 240 visual search tasks under two conditions differing only in the initial target presentation: the isolated target was presented either simultaneously with an oral message (multimodal presentation, MP) or without any message (visual presentation, VP). Average target selection times were three times longer, and errors almost twice as frequent, in the VP condition as in the MP condition. In addition, the contribution of spatial messages to visual search speed and accuracy was influenced by display layout and task difficulty. Most results are statistically significant. Moreover, subjective judgments indicate that the oral messages were well accepted.

    Feature extraction and selection for objective gait analysis and fall risk assessment by accelerometry

    Background: Falls in the elderly are a major concern because of their consequences on elderly people's general health and morale. Moreover, population aging and increasing life expectancy make fall prediction more and more important. The analysis presented in this article takes a first step in this direction by providing a way to analyze gait and classify hospitalized elderly patients as fallers or non-fallers. The tool, based on an accelerometer network and signal processing, gives objective information about gait and does not require a dedicated gait laboratory, as optical analysis does. It is also simple enough to be used by a non-expert and can therefore be applied widely to a large set of patients.
    Method: A population of 20 hospitalized elderly patients was asked to execute several classical clinical tests evaluating their risk of falling. They were also asked whether they had experienced any fall in the previous 12 months. Limb accelerations were recorded during the clinical tests with an accelerometer network distributed over the body. A total of 67 features were extracted from the accelerometric signal recorded during a simple 25 m walking test at comfortable speed. A feature selection algorithm was used to select the features able to classify subjects as at risk or not at risk, for several types of classification algorithms.
    Results: Several classification algorithms were able to discriminate between the two groups of interest: faller and non-faller hospitalized elderly patients. The classification performances of the algorithms were compared. Moreover, a subset of the 67 features was found, using a t-test, to differ significantly between the two groups.
    Conclusions: This study provides a method to classify a population of hospitalized elderly patients into two groups, at risk of falling or not, based on accelerometric data. This is a first step towards a fall-risk assessment system that could be used to provide appropriate treatment as early as possible, before a fall and its consequences. The tool could also be used to evaluate fall risk several times during the rehabilitation procedure.
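    As a rough illustration of the feature screening the abstract describes, the sketch below ranks per-feature discriminability between fallers and non-fallers using Welch's t-statistic. The feature values and their interpretation are hypothetical; the paper's actual selection algorithm and 67 features are not reproduced here.

    ```python
    import math

    def welch_t(a, b):
        """Welch's t-statistic for two independent samples."""
        na, nb = len(a), len(b)
        ma, mb = sum(a) / na, sum(b) / nb
        va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
        vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
        return (ma - mb) / math.sqrt(va / na + vb / nb)

    def rank_features(fallers, non_fallers):
        """Rank feature indices by |t|, most discriminative first.

        fallers / non_fallers: lists of per-subject feature vectors.
        """
        n_features = len(fallers[0])
        scores = []
        for j in range(n_features):
            a = [row[j] for row in fallers]
            b = [row[j] for row in non_fallers]
            scores.append((abs(welch_t(a, b)), j))
        return [j for _, j in sorted(scores, reverse=True)]

    # Hypothetical per-subject features (e.g. stride-time variability, RMS acceleration):
    fallers = [[0.9, 1.2], [1.1, 1.0], [1.0, 1.3]]
    non_fallers = [[0.4, 1.1], [0.5, 1.2], [0.3, 1.0]]
    print(rank_features(fallers, non_fallers))  # → [0, 1]
    ```

    In practice the top-ranked features would then be fed to the classifiers the study compares, with the t-test serving as the significance check mentioned in the Results.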