2 research outputs found

    GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation - A Framework

    In the last few decades, a variety of assistive technologies (AT) have been developed to improve the quality of life of visually impaired people. These include providing an independent means of travel and thus better access to education and places of work. There is, however, no metric for comparing and benchmarking these technologies, especially multimodal systems. In this dissertation, we propose GIVE-ME: Gamification In Virtual Environments for Multimodal Evaluation, a framework that allows developers and consumers to assess their technologies in a functional and objective manner. The framework rests on three foundations: multimodality, gamification, and virtual reality. It facilitates fuller and more controlled data collection, rapid prototyping and testing of multimodal ATs, benchmarking of heterogeneous ATs, and conversion of these evaluation tools into simulation or training tools. Our contributions are: (1) a unified evaluation framework, via an evaluative approach for multimodal visual ATs; (2) a sustainable evaluation, employing virtual environments and gamification techniques to create engaging games for users while collecting experimental data for analysis; (3) a novel psychophysics evaluation, enabling researchers to conduct psychophysics experiments even though the task is navigational; and (4) a novel collaborative environment, enabling developers to rapidly prototype and test their ATs with users, an early form of stakeholder involvement that fosters communication between developers and users. The dissertation first provides background on assistive technologies and the motivation for the framework, followed by a detailed description of the GIVE-ME framework, with particular attention to its user interfaces, foundations, and components. Four applications are then presented that describe how the framework is applied, with results and discussion for each. Conclusions and directions for future work are presented in the last chapter.
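    Contribution (3) above, running a psychophysics evaluation inside a navigational task, can be pictured as adaptively varying a stimulus parameter across game trials. The following minimal sketch shows a standard 1-up/2-down staircase over a hypothetical "obstacle contrast" parameter; the parameter name and the procedure are illustrative assumptions, not details taken from the dissertation.

```python
# Hedged illustration only: a generic 1-up/2-down staircase, a common
# psychophysics procedure, applied to a hypothetical "obstacle contrast"
# level that a gamified navigation trial might vary.
def staircase(run_trial, start=1.0, step=0.1, reversals_needed=8):
    """run_trial(level) -> True if the participant detected the (virtual) obstacle."""
    level, direction, reversals, correct_streak = start, 0, [], 0
    while len(reversals) < reversals_needed:
        if run_trial(level):
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row -> make it harder
                correct_streak = 0
                if direction == +1:          # step direction changed: record a reversal
                    reversals.append(level)
                direction = -1
                level = max(0.0, level - step)
        else:                                # one error -> make it easier
            correct_streak = 0
            if direction == -1:
                reversals.append(level)
            direction = +1
            level += step
    # Threshold estimate: mean level at the recorded reversals.
    return sum(reversals) / len(reversals)
```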

    Apport de la vision par ordinateur dans l'utilisabilité des neuroprothèses visuelles (Contribution of Computer Vision to the Usability of Visual Neuroprostheses)

    The WHO estimates that 45 million people worldwide are blind. This figure is rapidly increasing with the ageing of the world population, as blindness primarily affects elderly people. Visual neuroprostheses aim at restoring a form of vision. These systems convert visual information captured by a camera into dot-like percepts via electrical microstimulation of the visual system. The evoked visual perception corresponds to a black-and-white image of a few dozen pixels with gaps separating them. Although these systems give great hope to blind people, they are still unusable in a natural environment: the restored visual information is too coarse to allow complex functions such as navigation, object localization and recognition, or reading at a convenient speed. Over the last decades, computer vision has improved steadily, thanks to the development of new image processing algorithms and the increase in available processing power. For instance, it is now possible to reliably localize objects, faces, or text in real outdoor conditions. Interestingly, most current visual neuroprostheses include an external camera, making it possible to process the input images in order to adapt the phosphene display. In the present work, we showed that real-time image processing can improve the usability of low-resolution visual neuroprostheses by extracting high-level information from the input images. By detecting regions of interest in a natural scene and rendering them to the user through a limited number of phosphenes, our results indicate that adapted visuomotor behaviors can be restored: localizing pertinent objects, faces, or text within a natural scene.
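    The approach described above, extracting high-level information (objects, faces, text) from the camera image and rendering only the regions of interest through a small number of phosphenes, can be sketched as follows. This is a hedged illustration, not the authors' implementation: it assumes a hypothetical 10x10 phosphene grid and uses an off-the-shelf OpenCV face detector as the high-level vision step.

```python
# Illustrative sketch only: reduce a camera frame to a coarse phosphene map
# that keeps just a detected region of interest (here, the first face found).
import cv2
import numpy as np

GRID = (10, 10)  # assumed phosphene resolution: a few dozen dots

def phosphene_map(frame: np.ndarray) -> np.ndarray:
    """Return a GRID-sized intensity map that keeps only the first detected face."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    mask = np.zeros_like(gray)
    if len(faces) > 0:
        x, y, w, h = faces[0]
        mask[y:y + h, x:x + w] = gray[y:y + h, x:x + w]  # keep only the region of interest

    # Each cell of the downsampled image drives one simulated phosphene.
    return cv2.resize(mask, GRID, interpolation=cv2.INTER_AREA)
```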