1,049 research outputs found

    Impact of Imaging and Distance Perception in VR Immersive Visual Experience

    Virtual reality (VR) headsets have evolved to offer unprecedented viewing quality. At the same time, they have become lightweight, wireless, and low-cost, which has opened the door to new applications and a much wider audience. VR headsets can now provide users with a greater understanding of events and greater accuracy of observation, making decision-making faster and more effective. However, the spread of immersive technologies has shown a slow take-up, with the adoption of virtual reality limited to a few applications, typically related to entertainment. This reluctance appears to be due to the often-necessary change of operating paradigm and some scepticism towards the "VR advantage". The need therefore arises to evaluate the contribution that a VR system can make to user performance, for example in monitoring and decision-making. This will help system designers understand when immersive technologies can be proposed to replace or complement standard display systems such as a desktop monitor. In parallel with the evolution of VR headsets, 360° cameras have evolved as well; they are now capable of instantly acquiring photographs and videos in stereoscopic 3D (S3D) at very high resolutions. 360° images are innately suited to VR headsets, where the captured view can be observed and explored through the natural rotation of the head. Acquired views can even be experienced and navigated from the inside, as they are captured. The combination of omnidirectional images and VR headsets has opened up a new way of creating immersive visual representations. We call it photo-based VR. This new methodology combines traditional model-based rendering with high-quality omnidirectional texture-mapping. Photo-based VR is particularly suitable for applications related to remote visits and realistic scene reconstruction, useful for monitoring and surveillance systems, control panels, and operator training. The presented PhD study investigates the potential of photo-based VR representations. It starts by evaluating the role of immersion and user performance in today's graphics-based visual experience, then uses that as a reference to develop and evaluate new photo-based VR solutions. With the current literature on the photo-based VR experience and the associated user performance being very limited, this study builds new knowledge from the proposed assessments. We conduct five user studies on a few representative applications, examining how visual representations can be affected by system factors (camera- and display-related) and how they can influence human factors (such as realism, presence, and emotions). Particular attention is paid to realistic depth perception, to support which we develop targeted solutions for photo-based VR. They are intended to provide users with a correct perception of space dimensions and object size. We call it true-dimensional visualization. The presented work contributes to unexplored fields including photo-based VR and true-dimensional visualization, offering immersive system designers a thorough comprehension of the benefits, the potential, and the types of applications in which these new methods can make a difference. This thesis manuscript and its findings have been partly presented in scientific publications. In particular, five conference papers in Springer and IEEE symposia proceedings, [1], [2], [3], [4], [5], and one journal article in an IEEE periodical [6], have been published.
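
As a side note for readers unfamiliar with how a 360° photograph is explored through head rotation, the sketch below shows the core mapping on which such viewing rests: a head orientation (yaw, pitch) is converted to a pixel coordinate in an equirectangular image. This is a minimal illustrative Python sketch, not code from the thesis; the image layout, function name, and constants are assumptions.

# Illustrative sketch (not from the thesis): sampling an equirectangular
# 360-degree photograph from a head orientation, the basic mapping behind
# photo-based VR viewing. Assumes the image is a NumPy array of shape
# (height, width, 3) holding an equirectangular projection.
import numpy as np

def sample_equirectangular(image: np.ndarray, yaw: float, pitch: float) -> np.ndarray:
    """Return the pixel seen along a view direction given by head rotation.

    yaw   : rotation around the vertical axis, in radians, range [-pi, pi)
    pitch : elevation above the horizon, in radians, range [-pi/2, pi/2]
    """
    height, width = image.shape[:2]
    # Longitude maps linearly to the horizontal axis, latitude to the vertical.
    u = (yaw + np.pi) / (2.0 * np.pi)   # 0..1 across the image width
    v = (np.pi / 2.0 - pitch) / np.pi   # 0..1 from top to bottom
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return image[row, col]

# Example: look straight ahead (yaw=0, pitch=0) into a synthetic panorama.
panorama = np.zeros((1024, 2048, 3), dtype=np.uint8)
print(sample_equirectangular(panorama, yaw=0.0, pitch=0.0))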

    A philosophical discussion of the implications and limitations of using Virtual Reality Technology (VR) as an “Empathy Machine”

    This thesis engages in a philosophical discussion on "empathy", "virtuality", and the use of virtual reality (VR) technology as an "empathy machine". Here, I define empathy as the intentional activity (or skill) of recreating aspects of another subject's emotional experience in one's imagination to reflectively and "experientially" understand what another is feeling. As opposed to isomorphically appropriating another's feelings to oneself, I identify empathy as third-personally "feeling with" others. After exploring the narrow and pluralistic approaches to understanding empathy, I argue that there are compelling pragmatic reasons for adopting the pluralistic approach, the proponents of which prefer to highlight varieties of empathy instead of a sole conceptualisation of "empathy proper". As for virtuality, I subscribe to a third view that can be located between "virtual realism" and "virtual irrealism", in that I understand virtuality as a sui generis mode of technological actualisation, where psychophysiological illusions of virtual presence and embodiment coexist with veridical elements, such as virtual social objects, without causing a defect in users' rational judgment. My main contention in this research is that VR's multisensory affordances can be instrumentally utilised as a complementary extension (but never as a replacement) for offsetting some of the limitations in attaining interpersonal empathy through imaginative perspective-taking alone. After discussing this contention in more depth, I then attempt to address some of the recurrent challenges and criticisms raised against VR's use as an empathy machine. Finally, I highlight some of the limitations of VR technology's capability to capture and transmit a full representation of others' lived experiences.

    WearPut: Designing Dexterous Wearable Input Based on the Characteristics of Human Finger Motions

    Department of Biomedical Engineering (Human Factors Engineering)
    Powerful microchips for computing and networking allow a wide range of wearable devices to be miniaturized with high fidelity and availability. In particular, commercially successful smartwatches worn on the wrist drive market growth by sharing the roles of smartphones and health management. The emerging Head Mounted Displays (HMDs) for Augmented Reality (AR) and Virtual Reality (VR) also impact various application areas such as video games, education, simulation, and productivity tools. However, these powerful wearables face challenges in interaction because of the inevitably limited space for input and output imposed by the specialized form factors that fit body parts. To complement the constrained interaction experience, many wearable devices still rely on other large-form-factor devices (e.g., smartphones or hand-held controllers). Despite their usefulness, these additional devices can constrain the viability of wearables in many usage scenarios by tethering users' hands to physical devices. This thesis argues that developing novel human-computer interaction techniques for the specialized wearable form factors is vital for wearables to be reliable standalone products. It seeks to address the issue of constrained interaction experience with novel interaction techniques, exploring finger motions during input for the specialized form factors of wearable devices. Several characteristics of finger input motions promise to increase the expressiveness of input on the physically limited input space of wearable devices. First, finger-based input techniques are prevalent on many large-form-factor devices (e.g., touchscreens or physical keyboards) owing to their fast, accurate performance and high familiarity. Second, many commercial wearable products provide built-in sensors (e.g., a touchscreen or a hand tracking system) that can detect finger motions, enabling the implementation of novel interaction systems without any additional sensors or devices. Third, the specialized form factors of wearable devices can create unique input contexts as the fingers approach their locations, shapes, and components. Finally, the dexterity of the fingers, with their distinctive appearance, high degrees of freedom, and high sensitivity of joint-angle perception, has the potential to widen the range of input available through various movement features on the surface and in the air. Accordingly, the general claim of this thesis is that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. This thesis demonstrates the general claim by providing evidence in various wearable scenarios with smartwatches and HMDs. First, it explored the comfort range of static and dynamic touch input with angles on the touchscreen of smartwatches. The results showed specific comfort ranges that vary with finger, finger region, and pose, owing to the unique input context in which the touching hand approaches a small, fixed touchscreen within a limited range of angles. Then, finger region-aware systems that recognize the flat and the side of the finger were constructed based on the contact areas on the touchscreen to enhance the expressiveness of angle-based touch input.
In the second scenario, this thesis revealed the distinctive touch profiles of different fingers caused by the unique input context of the smartwatch touchscreen. The results led to the implementation of finger identification systems for distinguishing two or three fingers. Two virtual keyboards with 12 and 16 keys showed the feasibility of touch-based finger identification, which enables increases in the expressiveness of touch input techniques. In addition, this thesis supports the general claim across a range of wearable scenarios by exploring finger input motions in the air. In the third scenario, it investigated the motions of in-air finger stroking during unconstrained in-air typing for HMDs. The results of the observation study revealed details of in-air finger motions during fast sequential input, such as strategies, kinematics, correlated movements, inter-stroke relationships, and individual in-air keys. The in-depth analysis led to a practical guideline for developing robust in-air typing systems with finger stroking. Lastly, this thesis examined the viable locations of in-air thumb touch input to virtual targets above the palm. It was confirmed that fast and accurate sequential thumb touches can be achieved at a total of 8 key locations with the built-in hand tracking system of a commercial HMD. Final typing studies with a novel in-air thumb typing system verified increases in the expressiveness of virtual target selection on HMDs. This thesis argues that the objective and subjective results, together with the novel interaction techniques across these wearable scenarios, support the general claim that understanding how users move their fingers during input will enable increases in the expressiveness of the interaction techniques we can create for resource-limited wearable devices. Finally, the thesis concludes with its contributions, design considerations, and the scope of future research, for future researchers and developers implementing robust finger-based interaction systems on various types of wearable devices.
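
As an illustration of the kind of contact-shape cue that such finger region-aware systems can rely on, the sketch below labels a touch as made with the flat or the side of the finger using the contact ellipse that many touchscreen platforms report per touch point. This is a hypothetical Python sketch under assumed inputs, not the classifier from the thesis; the TouchPoint fields and the threshold value are placeholders.

# Illustrative sketch (not the thesis's system): telling the flat of the
# finger from its side using the reported touch contact ellipse. A flat-finger
# touch tends to produce a larger, rounder contact patch than a side touch,
# so a simple area threshold separates the two. The threshold is a placeholder.
import math
from dataclasses import dataclass

@dataclass
class TouchPoint:
    major_axis_mm: float  # length of the contact ellipse's major axis
    minor_axis_mm: float  # length of the contact ellipse's minor axis

def classify_finger_region(touch: TouchPoint, flat_area_threshold_mm2: float = 40.0) -> str:
    """Label a touch as made with the 'flat' or the 'side' of the finger."""
    contact_area = math.pi * (touch.major_axis_mm / 2.0) * (touch.minor_axis_mm / 2.0)
    return "flat" if contact_area >= flat_area_threshold_mm2 else "side"

# Example: a broad contact patch is labelled "flat", a narrow one "side".
print(classify_finger_region(TouchPoint(major_axis_mm=9.0, minor_axis_mm=7.0)))  # flat
print(classify_finger_region(TouchPoint(major_axis_mm=7.0, minor_axis_mm=4.0)))  # side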

    New interactive interface design for STEM museums: a case study in VR immersive technology

    Novel technologies are used to develop new museum exhibits, aiming to attract visitors' attention. However, using new technology is not always successful, perhaps because the design of a new exhibit was inappropriate or because users were unfamiliar with interacting with a new device. As a result, selecting among alternative technologies to create a unique interactive display is critical, and following best practices for using the chosen technology helps designers reduce failures. This research uses virtual reality (VR) immersive technology as a case study to explore how to design a new interactive exhibit in science, technology, engineering and mathematics (STEM) museums. VR has seen increased use in Thai museums, but people are unfamiliar with it and few use it daily. It also raises health concerns such as motion sickness, and the virtual reality head-mounted display (VR HMD) restricts social interaction, which is essential for museum visitors. This research focuses on improving how VR is deployed in STEM museums by proposing a framework for designing a new VR exhibit that supports social interaction. The research question is: how do we create a new interactive display using VR immersive technology while supporting visitor social interaction? The investigation uses mixed methods to construct the proposed framework, including a theoretical review, a museum observational study, and an experimental study. An in-the-wild study and a workshop were conducted to evaluate the proposed framework. The suggested framework provides guidelines for designing a new VR exhibit and has two main parts. The first part covers the factors to consider when assessing whether VR technology is suitable for creating a new exhibit. The second part covers the essential components for designing a new VR exhibit: Content Design, Action Design, Social Interaction Design, System Design, and Safety and Health. Various studies were conducted to answer the research question. First, a museum observational study led to an understanding of the characteristics of interactive exhibits in STEM museums, the patterns of social interaction, the range of immersive technology that museums use, and the practice of using VR technology in STEM museums. Next, a study of alternative designs for an interactive exhibit investigated the effects of tangible, gesture, and VR technologies on the user experience; it identified the factors that make the user experience differ and suggested six aspects to consider when choosing technology. Third, a study of social interaction design in VR for museums explored methods of connecting players (single player, a symmetric connection between two VR HMDs, and an asymmetric connection between a VR HMD and a PC) to provide social interaction while playing the VR exhibit, and investigated social features and social mechanics that let visitors communicate and exchange knowledge. It found that the symmetric connection provides better social interaction than the others, although the asymmetric link is also a way for visitors to exchange knowledge. The study recommends using mixed symmetric and asymmetric connections when deploying VR exhibits in a museum. This was confirmed by the in-the-wild research, which validated the framework and indicated that it helped staff manage the VR exhibit and provided co-presence and a co-player experience. Fourth, a study of the content design of a display in the virtual environment examined the effect of 2D versus 3D content on visitors' learning and memory.
It showed that whether content was designed in 2D or 3D did not influence how much visitors learned or how well they remembered the exhibit's story; however, the 3D view offers more immersion and emotion than the 2D view. The research therefore proposes using 3D when designing content meant to evoke a player's emotions; designing content for a VR exhibit should deliver an experience rather than text-based learning. Furthermore, the qualitative feedback from each study provided insight into designing the user experience. Evaluation of the proposed framework is the last part of this research. A study in the wild was conducted to validate the proposed framework in museums. Two VR exhibits were adjusted with features that matched the components suggested by the proposed framework and were deployed in the museum to gather visitors' feedback. The exhibits received positive feedback, and visitors approved of using VR technology in the museum. User feedback from a workshop evaluating the helpfulness of the framework showed that the framework's components are appropriate and that the framework is practical for designing a new VR exhibit, particularly for people unfamiliar with VR technology. In addition, the proposed framework may be applied when studying emerging technology to create a novel exhibit.

    HAPTIC FEELING: GENEALOGIES BETWEEN ART HISTORY, CRITICISM AND NEW MEDIA

    This doctoral project aims to offer a genealogical investigation of haptic perception in art history. Although haptic perception has been the subject of an extraordinarily articulated and interdisciplinary panorama of studies from the last decade of the 19th century to the new millennium, it has yet to receive systematic recognition in art history and historiography. Intersecting these directions, the four sections that make up the thesis aim to create a cohesive and harmonious treatment. The motility of the hand is the protagonist of the first section, entitled Prehistory of the haptic. From iconography to morphogenesis, which, starting from a philological deconstruction of Jeff Koons' Balloon Venus series (2008-2021) and its relationship with the haptic feeling "of positivity", aims to formulate a hypothesis for a reinterpretation of modernist plastic hinged on this prehistoric artefact and on some of its figures, tracing a submerged historiography that could support this critical proposal. The second section of the project, entitled The History of the Haptic - State of the Literature and Critical Reconnaissance, initiates an itinerary into the 20th-century development of the notion of the haptic in the art-historical sphere through a critical re-reading of a panorama of sources, approaches, and orientations. In posing the haptic as a methodological question, the section moves from a critical reconnaissance of Alois Riegl's role and an examination of the figures elaborated in the historiographic field to an analysis of the relations between haptics and the historical avant-garde. Following this, the third section of the work, entitled An Alternative History of Haptics, hypothesizes a 'prehistoric' and laboratory genealogy leading back to the 1892 rehabilitation of the term 'haptic/haptics' in the psycho-aesthetic sphere, tracing a network of relations between the scientific side (Wundt, Dessoir, Titchener, Münsterberg, James) and the artistic side (Berenson and Stein). Such an interweaving gives rise to the fourth section of the present work, entitled In the histories of the haptic, which aims to interrogate selected nodes through which, in the second half of the 20th century, the haptic, understood as a conceptual figure and an interdisciplinary dynamic, gave rise to a web of case studies whose analysis can assist the study of haptic feeling from an art-historical perspective and which, from the New York of 1916 up to the 1990s, highlights how the haptic has grafted itself onto, and actively participated in, stylistic issues and the intermedial development of sculpture. Finally, the last section, entitled "In the sanctuary cave: haptic feeling and new media" and conceived as an appendix, interrogates the multimodal disposition of the haptic when sculpture takes over complex environmental and intermedial organisms. It again interrogates the paleo-historical framework in relation to the contemporary mediasphere and the modes of evoking touch widely probed by artists such as Laure Prouvost, Camille Henrot, Julien Prévieux and Marguerite Humeau.

    Real-time motion capture and game engine technologies in contemporary dance

    This Master of Arts thesis is made for the New Media study programme in the School of Arts, Design and Architecture at Aalto University, under the supervision of Matti Niinimäki and with advising from Nuno Antonio Do Nascimento Correia and Teemu Määttänen. The study focuses on real-time motion capture and game engine technologies in contemporary dance, with the goal of discovering how these technologies can augment contemporary dance, in both the visual and audio domains, in a way in which sound, visuals, and choreography influence one another. The methods used to achieve this goal include devising a mixed reality audiovisual dance performance as part of a practice-based research methodology, a review of related work, and an interview with a field expert. Although the topic of motion capture in contemporary dance is fairly well researched, there is a clear shortage of studies on the ways game engines could be utilized in this segment of art, and even fewer studies on modern hybrid club music and its influence on contemporary dance. The current research fills these gaps. The study includes a brief overview of the Dance and Technology art movement, elucidates motion capture and game engine technologies, and attempts to define modern hybrid club music. It covers a broad selection of case studies from the contemporary dance segment related to each category, as well as the writer's own perspective and experience with motion capture and modern hybrid club music. Furthermore, the research includes an interview with the pioneering virtual performer Sam Rolfes, who actively uses real-time motion capture, game engines, and other real-time tools in his artistic practice. Finally, it explains in great detail the whole design process behind the mixed reality audiovisual dance performance piece "ROCK/STAR Vol.1", the artistic component of this research. Using various game engine technologies together with real-time motion capture data can help establish a greater connection between the different artistic domains of the performance, as well as provide a much stronger sense of a world and a story for the performer wearing the suit. The ability to execute things in real time that this technology offers makes it possible for performers to respond to one another, to the audience, and to the current moment in time, thus embracing and crystallizing the originality and specificity of the moment.
Media files notes: Fragment of "ROCK/STAR Vol.1", the artistic component of this research. Description: Video recording (fragment) of the premiere of the mixed reality audiovisual dance performance "ROCK/STAR Vol.1", which took place on 20 May 2023 in the Odeion Screening Auditorium in Otaniemi. Media rights: CC-BY-NC-ND 4.0