
    Investigation on the Phantom Image Elevation Effect

    Listening tests were carried out to evaluate the phantom image elevation effect as a function of the horizontal stereophonic base angle. Seven ecologically valid sound sources as well as four noise sources were tested. Subjects judged the perceived positions of the phantom centre image created with seven loudspeaker base angles. Results generally showed that perceived images were elevated from the front towards overhead as the loudspeaker base angle increased up to around 180°. This tendency depended on the spectral characteristics of the sound source. The results are explained from both physical and cognitive points of view.

    Evaluation of the Phantom Image Elevation Effect

    This paper introduces the author's recent research on the elevation effect perceived with horizontal phantom images. Early research in stereophony suggests that a phantom centre image produced by two loudspeakers placed symmetrically about the listener would be perceived at an elevated position, with its elevation angle increasing as the loudspeaker base angle increases. In particular, an image presented from loudspeakers placed around the listener's sides would be perceived overhead. With 3D audio formats employing height and overhead channels in mind, this elevation effect is considered useful for creating a virtual overhead loudspeaker image using just ear-level loudspeakers, especially for sound effects (e.g. in downmix scenarios). Another psychoacoustic principle relevant to 3D audio formats is the so-called 'pitch-height' effect, whereby the higher the frequency of a sound, the higher its image is perceived. However, past research on this topic only considered loudspeakers placed in the median plane. Against this background, several subjective experiments were conducted on the elevation of horizontally oriented phantom images. This paper first presents a vertical localisation test conducted with frontal stereo loudspeakers using octave-band noise stimuli. The results not only confirm the elevation effect for broadband noise, but also show the existence of an elevation effect for middle frequency bands. The second experiment introduced in this paper not only verifies the existence of the virtual overhead perception depending on loudspeaker base angle, but also shows that the effect heavily depends on the type of sound source.

    Active Grasping Control of Virtual-Dexterous-Robot Hand with Open Inventor


    Multisensory Augmented Reality in Cultural Heritage: Impact of Different Stimuli on Presence, Enjoyment, Knowledge and Value of the Experience

    Little is known about the impact of adding each stimulus in multisensory augmented reality experiences in cultural heritage contexts. This paper investigates the impact of different sensory conditions on users' sense of presence, enjoyment, knowledge about the cultural site, and value of the experience. Five conditions, namely Visual, Visual + Audio, Visual + Smell, and Visual + Audio + Smell, plus a regular visit referred to as the None condition, were evaluated by a total of 60 random visitors distributed across the conditions. According to the results, the addition of particular types of stimuli affected the presence subscale scores, namely spatial presence, involvement, and experienced realism, but did not influence the overall presence score. Overall, the results revealed that the addition of stimuli improved enjoyment and knowledge scores and did not affect the value-of-the-experience scores. We conclude that each stimulus has a differential impact on the studied variables, demonstrating that its usage should depend on the goal of the experience: smell should be used to privilege realism and spatial presence, while audio should be adopted when the goal is to elicit involvement.

    Gaze-Hand Alignment: Combining Eye Gaze and Mid-Air Pointing for Interacting with Menus in Augmented Reality

    Gaze and freehand gestures suit Augmented Reality, as users can interact with objects at a distance without the need for a separate input device. We propose Gaze-Hand Alignment as a novel multimodal selection principle, defined by the concurrent use of gaze and hand for pointing, with the alignment of their input on an object acting as the selection trigger. Gaze naturally precedes manual action and is leveraged for pre-selection, and manual crossing of a pre-selected target completes the selection. We demonstrate the principle in two novel techniques: Gaze&Finger, for input by direct alignment of the hand and a finger raised into the line of sight, and Gaze&Hand, for input by indirect alignment of a cursor with relative hand movement. In a menu selection experiment, we evaluated the techniques against Gaze&Pinch and a hands-only baseline. The study showed the gaze-assisted techniques to outperform hands-only input, and gives insight into the trade-offs of combining gaze with direct or indirect, and spatial or semantic, freehand gestures.
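
    To make the alignment principle concrete, here is a minimal Python sketch of the selection logic as the abstract describes it: gaze pre-selects whatever it rests on, and a hand crossing of that same target completes the selection. The class, method names, and event interface are illustrative assumptions, not the paper's implementation.

        # Hypothetical event-driven sketch of Gaze-Hand Alignment selection.
        class GazeHandAligner:
            def __init__(self):
                self.pre_selected = None  # target the gaze currently rests on

            def on_gaze_hit(self, target):
                # Gaze naturally precedes manual action: use it for pre-selection.
                self.pre_selected = target

            def on_hand_hit(self, target):
                # Selection triggers only when hand input aligns with the gazed target.
                if target is not None and target == self.pre_selected:
                    return target  # selection completed
                return None

        aligner = GazeHandAligner()
        aligner.on_gaze_hit("menu_item_3")          # user looks at a menu item
        print(aligner.on_hand_hit("menu_item_3"))   # hand crosses it -> selected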

    Enhanced device-based 3D object manipulation technique for handheld mobile augmented reality

    3D object manipulation is one of the most important tasks for handheld mobile Augmented Reality (AR) to reach its practical potential, especially for real-world assembly support. In this context, the techniques used to manipulate 3D objects are an important research area. This study therefore developed an improved device-based interaction technique within handheld mobile AR interfaces to solve the large-range 3D object rotation problem, as well as issues related to 3D object position and orientation deviations during manipulation. The research first enhanced the existing device-based 3D object rotation technique with a control structure that uses the tilting and skewing amplitudes of the handheld device to determine the rotation axes and directions of the 3D object. Whenever the device is tilted or skewed beyond the threshold amplitudes, the 3D object rotates continuously at a pre-defined angular speed per second, preventing over-rotation of the handheld device. Such over-rotation is common when using the existing technique for large-range 3D object rotations, and it must be avoided because it causes 3D object registration errors and a display issue in which the 3D object does not appear consistent within the user's range of view. Second, the existing device-based 3D object manipulation technique was restructured by separating the degrees of freedom (DOF) of 3D object translation and rotation, to prevent the position and orientation deviations caused by integrating both tasks into the same control structure. An improved device-based interaction technique was then developed, with better task completion times for 3D object rotation in isolation and for 3D object manipulation overall within handheld mobile AR interfaces. A pilot test was carried out before the main tests to determine several pre-defined values used in the control structure of the proposed 3D object rotation technique. A series of 3D object rotation and manipulation tasks was designed and developed as separate experiments to benchmark the proposed rotation and manipulation techniques against existing ones on task completion time (s). Two groups of participants aged 19-24 were recruited, one per experiment, each consisting of sixteen participants. Each participant completed twelve trials, for a total of 192 trials per experiment. Repeated-measures analysis was used to analyze the data. The results statistically confirm that the developed 3D object rotation technique markedly outpaced the existing technique, with task completion times 2.04 s shorter on easy tasks and 3.09 s shorter on hard tasks when comparing mean times over all successful trials. Regarding failed trials, the proposed rotation technique was 4.99% more accurate on easy tasks and 1.78% more accurate on hard tasks than the existing technique. Similar results extended to the 3D object manipulation tasks, with the proposed manipulation technique achieving an overall task completion time 9.529 s shorter than the existing technique. Based on these findings, an improved device-based interaction technique was successfully developed to address the insufficient functionality of the current technique.
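
    The threshold-triggered rotation control described above can be sketched as follows. This is a hedged Python illustration with placeholder threshold and speed values, not the study's calibrated ones.

        # Illustrative control structure: tilt/skew amplitudes select the rotation
        # axes and directions; crossing a threshold starts continuous rotation at
        # a fixed angular speed, so the user never over-rotates the device.
        TILT_THRESHOLD_DEG = 15.0    # hypothetical activation amplitude
        ANGULAR_SPEED_DEG = 45.0     # hypothetical pre-defined speed (deg/s)

        def rotation_step(tilt_deg, skew_deg, dt):
            """Return (x_rot, y_rot) rotation increments for a frame of dt seconds."""
            x_rot = y_rot = 0.0
            if abs(tilt_deg) > TILT_THRESHOLD_DEG:
                # Tilt direction selects the rotation direction about the x axis.
                x_rot = ANGULAR_SPEED_DEG * dt * (1.0 if tilt_deg > 0 else -1.0)
            if abs(skew_deg) > TILT_THRESHOLD_DEG:
                # Skew direction selects the rotation direction about the y axis.
                y_rot = ANGULAR_SPEED_DEG * dt * (1.0 if skew_deg > 0 else -1.0)
            return x_rot, y_rot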

    Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video

    eXtended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use, and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research focuses on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), it addresses two issues affecting user experience in 360° video: attention guidance and Visually Induced Motion Sickness (VIMS). This research relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creating spatial audio soundtracks for 360° video that also enables the collection and analysis of metrics captured from the user experience; and (3) a VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that controls parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users, whereas a partial spatialization of music was deemed ineffective for orientation. Additionally, the results demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamically restricted FoV is statistically significant in mitigating VIMS while maintaining desired levels of presence. Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, where the IVRUX artifact is the product of design knowledge and gave direction to the research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate future directions in making 360° video a rich design space for interaction and narrative.
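
    As a rough illustration of the pipeline's final stage, the sketch below maps a per-frame motion estimate (e.g. mean optical-flow magnitude) to a dynamically restricted FoV: more on-screen motion yields a narrower view and less peripheral optical flow. The linear mapping and all constants are assumptions for illustration, not the thesis's tuned parameters.

        FOV_MAX_DEG = 100.0       # unrestricted field of view
        FOV_MIN_DEG = 60.0        # strongest restriction
        FLOW_AT_MIN_FOV = 30.0    # flow magnitude (px/frame) that triggers FOV_MIN

        def restricted_fov(mean_flow_px):
            """Map a motion estimate to a field-of-view restriction in degrees."""
            t = min(max(mean_flow_px / FLOW_AT_MIN_FOV, 0.0), 1.0)
            return FOV_MAX_DEG - t * (FOV_MAX_DEG - FOV_MIN_DEG)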

    Enhanced Virtuality: Increasing the Usability and Productivity of Virtual Environments

    With ever-increasing display resolution, more accurate tracking, and falling prices, Virtual Reality (VR) systems are on the verge of establishing themselves successfully in the market. Various tools help developers create complex multi-user interactions within adaptive virtual environments. However, the spread of VR systems also brings additional challenges: diverse input devices with unfamiliar shapes and button layouts prevent intuitive interaction. Moreover, the limited functionality of existing software forces users to fall back on conventional PC- or touch-based systems. Collaboration with other users at the same location also raises challenges regarding the calibration of different tracking systems and collision avoidance. In remote collaboration, interaction is additionally affected by latency and connection losses. Finally, users have different requirements for the visualization of content within the virtual worlds, e.g. size, orientation, colour, or contrast. A strict replication of real environments in VR wastes potential and will not make it possible to accommodate users' individual needs. To address these problems, this thesis presents solutions in the areas of input, collaboration, and augmentation of virtual worlds and users, aimed at increasing the usability and productivity of VR. First, PC-based hardware and software are transferred into the virtual world to preserve the familiarity and functionality of existing applications in VR. Virtual stand-ins for physical devices, e.g. keyboard and tablet, and a VR mode for applications allow users to carry real-world skills over into the virtual world. Furthermore, an algorithm is presented that enables the calibration of multiple co-located VR devices with high accuracy, low hardware requirements, and little effort. Since VR headsets block out the user's real surroundings, the relevance of a full-body avatar visualization for collision avoidance and remote collaboration is demonstrated. In addition, personalized spatial or temporal modifications are presented that increase users' usability, work performance, and social presence. Discrepancies between the virtual worlds that arise from personal adaptations are compensated for by avatar redirection methods. Finally, some of the methods and findings are integrated into an exemplary application to demonstrate their practical applicability. This thesis shows that virtual environments can build on real skills and experiences to ensure familiar and easy interaction and collaboration among users. Moreover, individual augmentations of virtual content and avatars make it possible to overcome real-world limitations and enhance the experience of VR environments.
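
    The co-location calibration problem mentioned above is commonly posed as rigid point-set alignment. The Python sketch below uses the standard Kabsch algorithm on corresponding marker positions recorded in both tracking systems; it is a generic illustration of the problem, not necessarily the thesis's own algorithm.

        import numpy as np

        def calibrate(points_a, points_b):
            """Rigid transform (R, t) such that points_b ~= R @ points_a + t."""
            A = np.asarray(points_a, dtype=float)   # N x 3 samples in system A
            B = np.asarray(points_b, dtype=float)   # same samples in system B
            ca, cb = A.mean(axis=0), B.mean(axis=0)
            H = (A - ca).T @ (B - cb)               # 3 x 3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cb - R @ ca
            return R, t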

    Sparse Volumetric Deformation

    Volume rendering is becoming increasingly popular as applications require realistic solid shape representations with seamless texture mapping and accurate filtering. However, rendering sparse volumetric data is difficult because of the limited memory and processing capabilities of current hardware. To address these limitations, the volumetric information can be stored at progressive resolutions in the hierarchical branches of a tree structure and sampled according to the region of interest. This means that only a partial region of the full dataset is processed, and therefore massive volumetric scenes can be rendered efficiently. The problem with this approach is that it currently only supports static scenes, because it is difficult to accurately deform massive numbers of volume elements and reconstruct the scene hierarchy in real time. Another problem is that deformation operations distort the shape where more than one volume element tries to occupy the same location and, similarly, gaps occur where deformation stretches the elements further than one discrete location. It is also challenging to efficiently support sophisticated deformations at hierarchical resolutions, such as character skinning or physically based animation. These types of deformation are expensive and require a control structure (for example a cage or skeleton) that maps to a set of features to accelerate the deformation process. The difficulty with this technique is that the varying volume hierarchy reflects different feature sizes, and manipulating the features at the original resolution is too expensive; therefore the control structure must also hierarchically capture features according to the varying volumetric resolution. This thesis investigates deforming and rendering massive amounts of dynamic volumetric content. The proposed approach efficiently deforms hierarchical volume elements without introducing artifacts and supports both ray-casting and rasterization renderers. This enables light transport to be modeled both accurately and efficiently, with applications in real-time rendering and computer animation. Sophisticated volumetric deformation, including character animation, is also supported in real time. This is achieved by automatically generating a control skeleton that is mapped to the varying feature resolution of the volume hierarchy. The output deformations are demonstrated in massive dynamic volumetric scenes.
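
    A minimal sketch of the hierarchical storage idea, assuming a sparse octree whose nodes hold progressively finer voxel bricks: a renderer descends only the branches covering its region of interest and falls back to coarser data where no refinement exists. All field and method names are illustrative.

        from dataclasses import dataclass, field

        @dataclass
        class OctreeNode:
            level: int                    # 0 = coarsest resolution
            brick: object = None          # voxel data at this node's resolution
            children: dict = field(default_factory=dict)  # sparse: octant -> node

            def sample(self, octant_path):
                """Follow a path of octant indices toward the region of interest,
                returning the finest brick stored along the way."""
                node = self
                for octant in octant_path:
                    child = node.children.get(octant)
                    if child is None:
                        break             # region not refined: coarser brick suffices
                    node = child
                return node.brick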