32 research outputs found

    Comparaison et combinaison de rendus visuels et sonores pour la conception d'interfaces homme-machine (des facteurs humains aux stratégies de présentation à base de distorsion)

    Get PDF
    Although more and more sound and audiovisual data are available, the majority of the interfaces that give access to them rely solely on visual presentation. Many visualization techniques have already been proposed that present several documents simultaneously and use distortions to highlight the most relevant information. We propose to define equivalent auditory techniques for the presentation of several competing sound files, and to combine such audio and visual presentation strategies optimally for multimedia documents. To adapt these strategies to the user as well as possible, we studied the perceptual and attentional processes involved in listening to and watching competing audiovisual objects, focusing on the interactions between the two modalities. Exploiting the parameters of visual size and sound level, we extended the concept of the magnifying lens, used in visual focus+context methods, to the auditory and audiovisual modalities. Based on this concept, an application for navigating a collection of video documents was developed. We compared our tool with another rendering mode, called Pan&Zoom, through a usability study. The results, in particular the subjective ones, encourage further work on multimodal presentation strategies that combine an audio rendering with the visual renderings already available. A second study concerned the identification of environmental sounds in noise in the presence of a visual context. The noise simulated the presence of several simultaneous sound sources, as would occur in an interface where audio and audiovisual documents are presented together. The results of this experiment confirmed the advantage of multimodality under degraded audio conditions. Moreover, beyond the primary goals of the thesis, the study confirmed the importance of semantic congruency between the visual and auditory components for object recognition, and deepened our knowledge of the auditory perception of environmental sounds. Finally, we investigated the attentional processes involved in searching for one object among several, in particular the pop-out phenomenon whereby a salient object automatically attracts attention. In vision, a sharp object attracts attention among blurred objects, and some visual presentation strategies already exploit this parameter. We therefore extended the notion of blur to the auditory and audiovisual modalities by analogy. A series of perceptual experiments confirmed that a sharp object among blurred objects attracts attention, whatever the modality. Search and identification are then faster when the sharpness cue is applied to the target, but slower when it is applied to a distractor, revealing an involuntary guidance effect. Concerning crossmodal interaction, a redundant combination of audio and visual blur proved even more effective than a unimodal presentation. The results also indicate that an optimal combination does not necessarily require applying a distortion to both modalities.
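    The abstract does not specify how the auditory counterpart of visual blur was realized. As a purely illustrative sketch, one common way to obtain a "blurred" sound is to low-pass filter it so that only the target remains spectrally sharp; the filter type and cutoff below are assumptions, not the thesis design.

        # Illustrative auditory analogue of visual blur: distractor sounds are
        # low-pass filtered ("blurred"), the target sound is left unfiltered.
        # Filter type and cutoff are assumptions, not the design used in the thesis.
        import numpy as np
        from scipy.signal import butter, sosfilt

        def blur_audio(signal: np.ndarray, sr: int, cutoff_hz: float = 800.0) -> np.ndarray:
            """Low-pass filter a mono signal to make it perceptually 'blurry'."""
            sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
            return sosfilt(sos, signal)

        def render_scene(sounds: list, sr: int, target_index: int) -> np.ndarray:
            """Mix several concurrent sounds, keeping only the target sharp."""
            length = max(len(s) for s in sounds)
            mix = np.zeros(length)
            for i, s in enumerate(sounds):
                out = s if i == target_index else blur_audio(s, sr)
                mix[: len(out)] += out
            return mix / max(1, len(sounds))  # crude normalization to avoid clipping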

    Influence of vision on short-term sound localization training with non-individualized HRTF

    Get PDF
    Previous studies have demonstrated that humans can adapt to new, non-individualized or altered HRTFs in a short time period. While natural adaptation through sound exposure takes several weeks [1], training programs have been used to accelerate adaptation and improve sound localization performance within a few days (see [2] for a review). Most of these training programs are based on audio-visual positional or response feedback [3] (participants correct their answer after seeing the target position), or on active learning, for example through audio-proprioceptive manipulations [4] (blindfolded participants actively explore the sphere around them by playing a mini sonified version of the hot-and-cold game). While all these training programs rely on a bimodal coupling (audio-visual [3] or audio-proprioceptive [4]), they are rarely based on a trimodal one. Since vision is not necessary for adaptation [4], and audio-visual training can even be less efficient than other methods [1,2], the role of vision in short-term audio localization training remains unclear, especially when action and proprioception are already involved. Our study compares two versions of active training: an audio-proprioceptive one and an audio-visuo-proprioceptive one. We hypothesize that combining all modalities leads to better adaptation, inducing better performance and a longer-lasting effect. The experiment is developed in virtual reality using an HTC Vive as head- and hand-tracker. 3D audio spatialization is obtained through Steam Audio's non-individualized built-in HRTF. When applicable, 3D visual information is displayed directly on the Vive screen. A total of 36 participants, equally distributed into 3 groups (G1 to G3), take part in this between-subjects study. G1 is a control group receiving no training session, while the two other groups receive a 12-minute training session on each of 3 consecutive days. All participants also perform 5 sound localization tests (no feedback, hand-pointing technique, 2 repetitions × 33 positions, frontal space): one before the experiment, one after each training session, and the last one 1 week after the first day in order to evaluate the remaining effect. G2 receives audio-proprioceptive training as described in [4]: participants freely scan the space around them with their hand-held Vive controller to find an animal sound hidden around them; the controller-to-target angular distance is sonified and spatialized at the controller position, and no visual information is provided. G3 performs the same task as G2, but a visual representation of a sphere is also displayed at the hand position during all training sessions (audio-visuo-proprioceptive situation). We measure the angular error in azimuth and elevation during the localization tests. Performance is also analyzed in the interaural polar coordinate system to discuss front/back and up/down confusion errors. Data from the training sessions are logged (total number of found animals and detailed sequence of hand positions) to evaluate how training and vision influence the scanning strategy. The experimental phase is taking place right now (10 participants have completed it so far) and extends until the end of April. Complete results will be available for the final version of the paper in June.
    References: [1] Carlile, S., and Blackman, T. Relearning auditory spectral cues for locations inside and outside the visual field. J. Assoc. Res. Otolaryngol. 15, 249–263 (2014). [2] Strelnikov, K., Rosito, M., and Barone, P. Effect of audiovisual training on monaural spatial hearing in horizontal plane. PLoS ONE 6:e18344 (2011). [3] Mendonça, C. A review on auditory space adaptation to altered head-related cues. Front. Neurosci. 8, 219 (2014). [4] Parseihian, G., and Katz, B.F.G. Rapid head-related transfer function adaptation using a virtual auditory environment. J. Acoust. Soc. Am. 131, 2948–2957 (2012).
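    To make the measures concrete, a minimal sketch of the geometry involved is given below: the angle between a pointing direction and the target direction (the quantity sonified during training), and a conversion of a head-centred direction to azimuth/elevation for the localization error. Names and axis conventions are illustrative assumptions, not the authors' implementation.

        # Illustrative geometry helpers for the measures described above.
        import numpy as np

        def angular_distance_deg(v1: np.ndarray, v2: np.ndarray) -> float:
            """Angle in degrees between two 3D direction vectors, e.g. the
            controller pointing direction and the direction towards the target."""
            v1 = v1 / np.linalg.norm(v1)
            v2 = v2 / np.linalg.norm(v2)
            return float(np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))))

        def to_azimuth_elevation(direction: np.ndarray) -> tuple:
            """Head-centred direction (x right, y up, z forward) to azimuth and
            elevation in degrees (assumed convention: 0 deg straight ahead)."""
            x, y, z = direction / np.linalg.norm(direction)
            azimuth = float(np.degrees(np.arctan2(x, z)))
            elevation = float(np.degrees(np.arcsin(np.clip(y, -1.0, 1.0))))
            return azimuth, elevation

        # Example: localization error between a pointed and a true target direction
        pointed, target = np.array([0.2, 0.1, 1.0]), np.array([0.0, 0.0, 1.0])
        error_deg = angular_distance_deg(pointed, target)
        az_err = to_azimuth_elevation(pointed)[0] - to_azimuth_elevation(target)[0]
        el_err = to_azimuth_elevation(pointed)[1] - to_azimuth_elevation(target)[1]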

    Head-mounted augmented reality to support reassurance and social interaction for autistic children with severe learning disabilities

    Get PDF
    Augmented Reality (AR) shows promise for complementing autism interventions, but so far it has mainly focused on training socio-emotional abilities in autistic children with mild learning disabilities. To better address autistic children with severe learning disabilities and complex needs (SLN), stakeholders advise using collaborative, sensory-based AR mediation approaches. Magic Bubbles is a multisensory AR environment created on the basis of stakeholder interviews, then adapted to a day hospital setting in collaboration with practitioners, and finally validated in terms of acceptability and usability for autistic children with SLN. In this paper, we report on our latest study, which explores three main research questions: 1) To what extent can Magic Bubbles provide a sense of security to autistic children with SLN? 2) To what extent can Magic Bubbles foster the dyadic relationship between an autistic child with SLN and a practitioner? 3) What is the overall quality of experience for autistic children with SLN when using Magic Bubbles? To answer these questions, seven autistic children with SLN participated in at least six weekly sessions over three months in a day hospital setting. Data collection and analysis used qualitative and quantitative methods, mainly drawing upon grounded theory to evaluate their experiences. The findings answer the three research questions positively, offer a detailed account of the children's experiences with AR, and outline future directions.

    Comparison and combination of visual and audio renderings to conceive human-computer interfaces: from human factors to distortion-based presentation strategies

    No full text
    Although more and more sound and audiovisual data are available, the majority of the interfaces that give access to them rely solely on visual presentation. Many visualization techniques have already been proposed that present several documents simultaneously and use distortions to highlight the most relevant information. We propose to define equivalent auditory techniques for the presentation of several competing sound files, and to combine such audio and visual presentation strategies optimally for multimedia documents. To adapt these strategies to the user as well as possible, we studied the perceptual and attentional processes involved in listening to and watching competing audiovisual objects, focusing on the interactions between the two modalities. Exploiting the parameters of visual size and sound level, we extended the concept of the magnifying lens, used in visual focus+context methods, to the auditory and audiovisual modalities. Based on this concept, an application for navigating a collection of video documents was developed. We compared our tool with another rendering mode, called Pan&Zoom, through a usability study. The results, in particular the subjective ones, encourage further work on multimodal presentation strategies that combine an audio rendering with the visual renderings already available. A second study concerned the identification of environmental sounds in noise in the presence of a visual context. The noise simulated the presence of several simultaneous sound sources, as would occur in an interface where audio and audiovisual documents are presented together. The results of this experiment confirmed the advantage of multimodality under degraded audio conditions. Moreover, beyond the primary goals of the thesis, the study confirmed the importance of semantic congruency between the visual and auditory components for object recognition, and deepened our knowledge of the auditory perception of environmental sounds. Finally, we investigated the attentional processes involved in searching for one object among several, in particular the "pop-out" phenomenon whereby a salient object automatically attracts attention. In vision, a sharp object attracts attention among blurred objects, and some visual presentation strategies already exploit this parameter. We therefore extended the notion of blur to the auditory and audiovisual modalities by analogy. A series of perceptual experiments confirmed that a sharp object among blurred objects attracts attention, whatever the modality. Search and identification are then faster when the sharpness cue is applied to the target, but slower when it is applied to a distractor, revealing an involuntary guidance effect. Concerning crossmodal interaction, a redundant combination of audio and visual blur proved even more effective than a unimodal presentation. The results also indicate that an optimal combination does not necessarily require applying a distortion to both modalities.

    Comparaison et combinaison de rendus visuels et sonores pour la conception d'interfaces homme-machine : des facteurs humains aux stratégies de présentation à base de distorsion

    No full text
    Although more and more sound and audiovisual data are available, the majority of the interfaces that give access to them rely solely on visual presentation. Many visualization techniques have already been proposed that present several documents simultaneously and use distortions to highlight the most relevant information. We propose to define equivalent auditory techniques for the presentation of several competing sound files, and to combine such audio and visual presentation strategies optimally for multimedia documents. To adapt these strategies to the user as well as possible, we studied the perceptual and attentional processes involved in listening to and watching competing audiovisual objects, focusing on the interactions between the two modalities. Exploiting the parameters of visual size and sound level, we extended the concept of the magnifying lens, used in visual focus+context methods, to the auditory and audiovisual modalities. Based on this concept, an application for navigating a collection of video documents was developed. We compared our tool with another rendering mode, called Pan&Zoom, through a usability study. The results, in particular the subjective ones, encourage further work on multimodal presentation strategies that combine an audio rendering with the visual renderings already available. A second study concerned the identification of environmental sounds in noise in the presence of a visual context. The noise simulated the presence of several simultaneous sound sources, as would occur in an interface where audio and audiovisual documents are presented together. The results of this experiment confirmed the advantage of multimodality under degraded audio conditions. Moreover, beyond the primary goals of the thesis, the study confirmed the importance of semantic congruency between the visual and auditory components for object recognition, and deepened our knowledge of the auditory perception of environmental sounds. Finally, we investigated the attentional processes involved in searching for one object among several, in particular the "pop-out" phenomenon whereby a salient object automatically attracts attention. In vision, a sharp object attracts attention among blurred objects, and some visual presentation strategies already exploit this parameter. We therefore extended the notion of blur to the auditory and audiovisual modalities by analogy. A series of perceptual experiments confirmed that a sharp object among blurred objects attracts attention, whatever the modality. Search and identification are then faster when the sharpness cue is applied to the target, but slower when it is applied to a distractor, revealing an involuntary guidance effect. Concerning crossmodal interaction, a redundant combination of audio and visual blur proved even more effective than a unimodal presentation. The results also indicate that an optimal combination does not necessarily require applying a distortion to both modalities.

    Audio-Visual Renderings for Multimedia Navigation

    Get PDF
    Presented at the 16th International Conference on Auditory Display (ICAD 2010), June 9-15, 2010, Washington, DC. Our study focuses on multimodal information access to audiovisual databases and evaluates the effect of combining the visual modality with audio information. To do so, we developed two new exploration tools that extend two information visualization techniques, namely Fisheye Lens (FL) and Pan&Zoom (PZ), to the auditory modality. The FL technique combines coherent distortion of graphics, sound space and volume. The PZ technique was designed without visual distortion but with a slight distortion of audio volume. Both techniques were evaluated perceptually using a target-finding task with both visual-only and audio-visual renderings. We did not find significant differences between the audio-visual and visual-only conditions in terms of completion times. However, we did find significant differences in participants' qualitative evaluations of difficulty and efficiency. In addition, 63% of participants preferred the multimodal interface. For FL, the majority of participants judged the visual-only rendering as less efficient and appreciated the benefit of the audio rendering. For PZ, however, they were satisfied with the visual-only rendering and found the audio rendering distracting. We conclude with future design specifications.
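    As a rough illustration of the FL idea described above, with one distortion profile driving both the visual size and the audio gain of each document, the sketch below shows what such a coherent mapping could look like. The particular profile and parameter values are assumptions for illustration, not the implementation evaluated in the study.

        # Illustrative coherent audiovisual distortion for a Fisheye Lens rendering:
        # the same factor scales a document's thumbnail size and its playback gain,
        # so the focused item is both larger and louder than context items.
        # The profile and constants are assumptions, not the evaluated implementation.

        def fisheye_factor(dist_to_focus: float, radius: float = 1.0,
                           max_mag: float = 3.0) -> float:
            """Magnification factor: maximal at the focus, minimal outside the lens."""
            min_mag = 1.0 / max_mag
            if dist_to_focus >= radius:
                return min_mag                       # context items: shrunk and quiet
            t = 1.0 - dist_to_focus / radius         # 1 at the focus, 0 at the border
            return min_mag + t * (max_mag - min_mag)

        def render_item(base_size_px: float, base_gain: float, dist_to_focus: float):
            """Apply the same distortion coherently to graphics and audio volume."""
            m = fisheye_factor(dist_to_focus)
            size_px = base_size_px * m               # on-screen thumbnail size
            gain = min(1.0, base_gain * m)           # playback volume, clamped to 1
            return size_px, gain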

    Livrable 1.3.4 du projet UnitedVR : état de l'art sur la présence, l'immersion, la co-présence et l'incarnation en jeu vidéo et réalité mixte

    No full text
    This report is an internal deliverable of the FUI UnitedVR project. It consists of a state of the art on the factors of spatial and social presence, immersion and embodiment for the design of virtual reality video games. The UnitedVR project aims to promote and develop multi-player, multi-site video games in virtual reality. Within this project, our goal is to optimize the game experience (UX, for User eXperience) and the enjoyment felt by players during a session with such VR games. This report therefore aims, on the one hand, to identify the major dimensions of the VR game experience that must be attended to in order to foster UX and, on the other hand, to propose evaluation methods for each of these dimensions and then for the overall UX, so as to ultimately evaluate the games developed within the project. To this end, we focus on dimensions that are already well established as contributing both to the UX of VR systems (whether games or not) and to the game experience (whether in VR or not). This is notably the case for presence and immersion. Indeed, one of the main reasons for playing video games is the pleasure of being immersed in another universe. Concepts such as spatial presence or flow then describe the degree of immersion felt in this virtual universe, and are therefore important factors in measuring the game experience. Central to measures of UX in VR [Tcha-Tokey et al., 2016a], these dimensions are also fully taken into account in game experience models and questionnaires such as the PENS (Player Experience of Need Satisfaction [Ryan et al., 2006]), the GEQ (Game Experience Questionnaire [IJsselsteijn et al., 2013]) and the PXI (Player eXperience Inventory [Abeele et al., 2020]), which are correlated with the overall game experience according to the study of [Johnson et al., 2018]. Presence and immersion (chapter 1 and chapter 2, respectively) are also essential factors in evaluating UX in VR experiences in general [Tcha-Tokey et al., 2016b]. Likewise, the UnitedVR project targets multi-player play, where each player is represented by a virtual avatar that they control. We therefore also consider the sensations related to presence, namely social presence, also called co-presence (chapter 3), which concerns the feeling of sharing the universe with the other players, and self-presence, which refers to the feeling of embodying one's avatar (chapter 4). For each of these notions, the influencing factors and the different measurement possibilities are presented and then examined in the context of the UnitedVR project.

    Multisense Blind Shooter: Auditory mobile application to assist blind and visually impaired people in target shooting

    No full text
    The Multisense project aims to make sport, and in particular modern pentathlon, inclusive of blind and visually impaired individuals (BVI) within ordinary clubs. With that perspective in mind, a first prototype of a mobile application, called Multisense Blind Shooter, was developed to make the laser run, and especially target shooting, accessible to BVI. The smartphone is attached to an ordinary gun using a 3D-printed mount. The application relies on auditory feedback to inform shooters of the gesture correction they need to apply to hit the target. Four auditory feedback designs are proposed.
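    The abstract does not detail the four feedback designs. Purely as an illustration of what an aim-correction sonification could look like, the sketch below maps a hypothetical horizontal error to stereo panning and a vertical error to pitch; it is an assumption, not one of the designs proposed in the paper.

        # Hypothetical aim-correction sonification (not one of the paper's designs):
        # horizontal aim error -> stereo panning, vertical aim error -> pitch.
        import numpy as np

        def aim_feedback(h_error_deg: float, v_error_deg: float,
                         sr: int = 44100, dur: float = 0.2) -> np.ndarray:
            """Return a short stereo tone: pan indicates left/right correction,
            pitch indicates up/down correction (440 Hz when on target)."""
            t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
            freq = 440.0 * 2.0 ** (np.clip(v_error_deg, -10.0, 10.0) / 10.0)  # +/- 1 octave
            tone = 0.3 * np.sin(2.0 * np.pi * freq * t)
            pan = np.clip(h_error_deg / 10.0, -1.0, 1.0)    # -1 = full left, +1 = full right
            left = tone * (1.0 - pan) / 2.0
            right = tone * (1.0 + pan) / 2.0
            return np.stack([left, right], axis=1)          # shape (samples, 2)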

    Immersive sonification of protein surface

    No full text
    This paper presents our ongoing efforts to design an immersive sonification model that represents protein surfaces through 3D sound, in order to extend pre-existing protein visualisation methods without overloading visual perception. The protein surface is first discretized so that each point of the surface is attached to a sound source, spatialized in such a way that the user is immersed at the center of the protein. We add a spherical filtering system, which the user can control, to select the surface points to be rendered, in order to reinforce the auditory interpretation of the 3D shape. Several questions of interest to the VR and HCI communities are discussed, both on the consistency of audio and audiographical filtering, and on the multimodal integration of data coming from different points of view and points of listening in a 3D interactive space.
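    A minimal sketch of the selection and spatialization step described above is given below, assuming the discretized surface is available as an array of 3D point coordinates and the listener sits at the protein centre; the data layout and function names are illustrative assumptions.

        # Illustrative selection + spatialization step for the sonified protein surface:
        # keep only the surface points inside the user-controlled filtering sphere and
        # compute each point's direction and distance relative to the listener at the
        # protein centre (to be handed to a 3D audio spatializer).
        import numpy as np

        def select_sources(surface_points: np.ndarray, filter_center: np.ndarray,
                           filter_radius: float) -> np.ndarray:
            """Indices of the surface points inside the filtering sphere."""
            dists = np.linalg.norm(surface_points - filter_center, axis=1)
            return np.where(dists <= filter_radius)[0]

        def source_parameters(surface_points: np.ndarray, indices: np.ndarray,
                              listener: np.ndarray):
            """Per-source unit direction and distance relative to the listener."""
            rel = surface_points[indices] - listener
            dist = np.linalg.norm(rel, axis=1, keepdims=True)
            direction = rel / np.maximum(dist, 1e-9)
            return direction, dist.ravel()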