
    Balancing User Experience for Mobile One-to-One Interpersonal Telepresence

    The COVID-19 virus disrupted all aspects of our daily lives, and though the world is finally returning to normalcy, the pandemic has shown us how ill-prepared we are to support social interactions when expected to remain socially distant. Family members missed major life events of their loved ones; face-to-face interactions were replaced with video chat; and the technologies used to facilitate interim social interactions caused an increase in depression, stress, and burn-out. It is clear that we need better solutions to address these issues, and one avenue showing promise is that of Interpersonal Telepresence. Interpersonal Telepresence is an interaction paradigm in which two people can share mobile experiences and feel as if they are together, even though geographically distributed. In this dissertation, we posit that this paradigm has significant value in one-to-one, asymmetrical contexts, where one user can live-stream their experiences to another who remains at home. We discuss a review of the recent Interpersonal Telepresence literature, highlighting research trends and opportunities that require further examination. Specifically, we show how current telepresence prototypes do not meet the social needs of the streamer, who often feels socially awkward when using obtrusive devices. To combat this negative finding, we present a qualitative co-design study in which end users worked together to design their ideal telepresence systems, overcoming value tensions that naturally arise between Viewer and Streamer. Expectedly, virtual reality techniques are desired to provide immersive views of the remote location; however, our participants noted that the devices to facilitate this interaction need to be hidden from the public eye. This suggests that 360° cameras should be used, but the lenses need to be embedded in wearable systems, which might affect the viewing experience.
We thus present two quantitative studies in which we examine the effects of camera placement and height on the viewing experience, in an effort to understand how we can better design telepresence systems. We found that camera height is not a significant factor, meaning wearable cameras do not need to be positioned at the natural eye-level of the viewer; the streamer is able to place them according to their own needs. Lastly, we present a qualitative study in which we deploy a custom interpersonal telepresence prototype based on the co-design findings. Our participants preferred our prototype over simple video chat, even though it caused a somewhat increased sense of self-consciousness. Our participants indicated that they have their own preferences, even with simple design decisions such as the style of hat, and we as a community need to consider ways to allow customization within our devices. Overall, our work contributes new knowledge to the telepresence field and helps system designers focus on the features that truly matter to users, in an effort to let people have richer experiences and virtually bridge the distance to their loved ones.

    Presence 2005: the eighth annual international workshop on presence, 21-23 September, 2005 University College London (Conference proceedings)

    OVERVIEW (taken from the CALL FOR PAPERS) Academics and practitioners with an interest in the concept of (tele)presence are invited to submit their work for presentation at PRESENCE 2005 at University College London in London, England, September 21-23, 2005. The eighth in a series of highly successful international workshops, PRESENCE 2005 will provide an open discussion forum to share ideas regarding concepts and theories, measurement techniques, technology, and applications related to presence, the psychological state or subjective perception in which a person fails to accurately and completely acknowledge the role of technology in an experience, including the sense of 'being there' experienced by users of advanced media such as virtual reality. The concept of presence in virtual environments has been around for at least 15 years, and the earlier idea of telepresence at least since Minsky's seminal paper in 1980. Recently there has been a burst of funded research activity in this area for the first time with the European FET Presence Research initiative. What do we really know about presence and its determinants? How can presence be successfully delivered with today's technology? This conference invites papers that are based on empirical results from studies of presence and related issues and/or which contribute to the technology for the delivery of presence. Papers that make substantial advances in theoretical understanding of presence are also welcome. The interest is not solely in virtual environments but in mixed reality environments. Submissions will be reviewed more rigorously than in previous conferences. High quality papers are therefore sought which make substantial contributions to the field. Approximately 20 papers will be selected for two successive special issues for the journal Presence: Teleoperators and Virtual Environments. PRESENCE 2005 takes place in London and is hosted by University College London. 
The conference is organized by ISPR, the International Society for Presence Research, and is supported by the European Commission's FET Presence Research Initiative through the Presencia and IST OMNIPRES projects, and by University College London.

    Dead men's eyes: embodied GIS, mixed reality and landscape archaeology

    Archaeology has been at the forefront of attempts to use Geographic Information Systems (GIS) to address the challenges of exploring and recreating perception and social behaviour within a computer environment. However, these approaches have traditionally been based on the visual aspect of perception, and analysis has usually been confined to the computer laboratory. In contrast, phenomenological analyses of archaeological landscapes are normally carried out within the landscape itself; computer analysis away from the landscape in question is often seen as anathema to such approaches. This thesis attempts to bridge this gap by using a Mixed Reality (MR) approach. MR provides an opportunity to merge the real world with virtual elements of relevance to the past, including 3D models, soundscapes and immersive data. In this way, the results of sophisticated desk-based GIS analyses can be experienced directly within the field and combined with phenomenological analysis to create an embodied GIS. The thesis explores the potential of this methodology by applying it in the Bronze Age landscape of Leskernick Hill, Bodmin Moor, UK. Since Leskernick Hill has (famously) already been the subject of intensive phenomenological investigation, it is possible to compare the insights gained from 'traditional' landscape phenomenology with those obtained from the use of Mixed Reality, and effectively combine quantitative GIS analysis and phenomenological fieldwork into one embodied experience. This mixing of approaches leads to the production of an innovative new method which not only provides new interpretations of the settlement on Leskernick Hill but also suggests avenues for the future of archaeological landscape research more generally.

    Integrating passive ubiquitous surfaces into human-computer interaction

    Mobile technologies enable people to interact with computers ubiquitously. This dissertation investigates how ordinary, ubiquitous surfaces can be integrated into human-computer interaction to extend the interaction space beyond the edge of the display. It turns out that acoustic and tactile features generated during an interaction can be combined to identify input events, the user, and the surface. In addition, it is shown that a heterogeneous distribution of different surfaces is particularly suitable for realizing versatile interaction modalities. However, privacy concerns must be considered when selecting sensors, and context can be crucial in determining whether and what interaction to perform.
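The core idea above, fusing acoustic and tactile features captured during an interaction to identify the input event, can be illustrated with a minimal feature-fusion sketch. The summary statistics and nearest-centroid classifier below are illustrative assumptions, not the dissertation's actual sensing pipeline:

```python
import numpy as np

def extract_features(acoustic: np.ndarray, tactile: np.ndarray) -> np.ndarray:
    """Fuse simple summary statistics from both sensing channels into one vector."""
    def stats(x: np.ndarray) -> list:
        # mean, spread, peak, and energy of the raw signal window
        return [x.mean(), x.std(), np.abs(x).max(), np.sqrt((x ** 2).mean())]
    return np.array(stats(acoustic) + stats(tactile))

class NearestCentroid:
    """Minimal classifier: label an event by its closest class centroid."""
    def fit(self, X: np.ndarray, y: list) -> "NearestCentroid":
        labels = np.array(y)
        self.centroids_ = {c: X[labels == c].mean(axis=0) for c in set(y)}
        return self

    def predict(self, X) -> list:
        return [min(self.centroids_,
                    key=lambda c: np.linalg.norm(x - self.centroids_[c]))
                for x in X]
```

Trained on labelled event windows (e.g. soft taps versus hard knocks), the same fused feature vector could in principle also be used to distinguish users or surfaces, which is the combination the dissertation reports.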

    Investigation of police decision making using combined EEG and virtual reality methods

    Police officers in the UK are granted additional powers to allow them to protect life and property and to prevent crime. Of these powers, the sanction to use stopping, potentially lethal, force given to Authorised Firearms Officers (AFOs) is arguably the most salient. Each decision made by an AFO to discharge their firearm or not has great impact, and so it is important that we research the cognitive processes that lead to such a decision. One challenge of researching these cognitive processes is eliciting ecologically valid behaviour while maintaining internal validity. We approached this challenge by developing combined electroencephalography (EEG) and virtual reality research methods. Using these methods, we produced scenarios that reflected features of AFO training. First, we tested simple versions of the scenarios on a novice population. Following this, we increased the complexity of the scenarios and collected data from both AFOs and novices. We found that participants were fastest when responding to threatening scenarios. Further, AFOs had consistently faster response times than novices. In line with similar ‘Go/No-Go’ paradigms, we found greater increases in pre-response frontal-midline theta when participants did not shoot versus when they did. Comparisons of EEG between AFOs and novices revealed that AFOs showed greater pre-response increases in frontal-midline theta and central delta when equipping a firearm. Greater differences in delta activity were also observed between different levels of threat in the AFO group. Together, these findings suggest that differences in performance between experts and novices may be due to experts' greater attention towards threat. Further investigation of expert decision making should build on our use of naturalistic stimuli and expert participants to ensure that findings are ecologically valid. With the increasing accessibility of modern game engines and virtual reality technology, this approach will be beneficial to researchers in many fields where ecological validity is required.
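Frontal-midline theta effects like those reported above are conventionally quantified as spectral power in the 4-8 Hz band. A minimal sketch of such a band-power computation follows; this is a generic periodogram estimate with assumed channel data and sampling rate, not the thesis's actual EEG analysis pipeline:

```python
import numpy as np

def band_power(eeg: np.ndarray, fs: float, band: tuple = (4.0, 8.0)) -> float:
    """Summed periodogram power within a frequency band (default: theta, 4-8 Hz)."""
    eeg = np.asarray(eeg, dtype=float)
    freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * eeg.size)  # one-sided periodogram
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].sum())
```

A pre-response increase would then be expressed as, for example, the ratio of band power in a window just before the shoot/no-shoot decision to that in a baseline window.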

    FACING EXPERIENCE: A PAINTER’S CANVAS IN VIRTUAL REALITY

    Full version unavailable due to 3rd party copyright restrictions. This research investigates how shifts in perception might be brought about through the development of visual imagery created by the use of virtual environment technology. Through a discussion of historical uses of immersion in art, this thesis will explore how immersion functions and why immersion has been a goal for artists throughout history. It begins with a discussion of ancient cave drawings and the relevance of Plato’s Allegory of the Cave. Next it examines the biological origins of “making special.” The research will discuss how this concept, combined with the ideas of “action” and “reaction,” has reinforced the view that art is fundamentally experiential rather than static. The research emphasizes how present-day virtual environment art, in providing a space that engages visitors in computer graphics, expands on previous immersive artistic practices. The thesis examines the technical context in which the research occurs by briefly describing the use of computer science technologies, the fundamentals of visual arts practices, and the importance of aesthetics in new media and provides a description of my artistic practice. The aim is to investigate how combining these approaches can enhance virtual environments as artworks. The computer science of virtual environments includes both hardware and software programming. The resultant virtual environment experiences are technologically dependent on the types of visual displays being used, including screens and monitors, and their subsequent viewing affordances. Virtual environments fill the field of view and can be experienced with a head-mounted display (HMD) or a large screen display. The sense of immersion gained through the experience depends on how tracking devices and related peripheral devices are used to facilitate interaction.
The thesis discusses visual arts practices with a focus on how illusions shift our cognition and perception in the visual modalities. This discussion includes how perceptual thinking is the foundation of art experiences, how analogies are the foundation of cognitive experiences and how the two intertwine in art experiences for virtual environments. An examination of the aesthetic strategies used by artists and new media critics is presented to discuss new media art. This thesis investigates the visual elements used in virtual environments and prescribes strategies for creating art for virtual environments. Methods constituting a unique virtual environment practice that focuses on visual analogies are discussed. The artistic practice that is discussed as the basis for this research also concentrates on experiential moments and shifts in perception and cognition and references Douglas Hofstadter, Rudolf Arnheim and John Dewey. Virtual environments provide for experiences in which the imagery generated updates in real time. Following an analysis of existing artwork and critical writing relative to the field, the process of inquiry has required the creation of artworks that involve tracking systems, projection displays, sound work, and an understanding of the importance of the visitor. In practice, the research has shown that the visitor should be seen as an interlocutor, interacting from a first-person perspective with virtual environment events, where avatars or other instrumental intermediaries, such as guns, vehicles, or menu systems, do not occlude the view. The aesthetic outcomes of this research are the result of combining visual analogies, real time interactive animation, and operatic performance in immersive space. The environments designed in this research were informed initially by paintings created with imagery generated in a hypnopompic state or during the moments of transitioning from sleeping to waking.
The drawings often emphasize emotional moments as caricatures and/or elements of the face as seen from a number of perspectives simultaneously, in the way of some cartoons, primitive artwork or Cubist imagery. In the imagery, the faces indicate situations, emotions and confrontations which can offer moments of humour and reflective exploration. At times, the faces usurp the space and stand in representation as both face and figure. The power of the placement of the caricatures in the paintings becomes apparent as the imagery stages the expressive moment. The placement of faces sets the scene, establishes relationships and promotes the honesty and emotions that develop over time as the paintings are scrutinized. The development process of creating virtual environment imagery starts with hand-drawn sketches of characters, continues as paintings on “digital canvas”, which are built into animated, three-dimensional models and finally incorporated into a virtual environment. The imagery is generated while drawing, typically with paper and pencil, in a stream of consciousness during the hypnopompic state. This method became an aesthetic strategy for producing a snappy, straightforward sketch. The sketches are explored further as they are worked up as paintings. During the painting process, the figures become fleshed out and their placement on the page, in essence, brings them to life. These characters inhabit a world that I explore even further by building them into three-dimensional models and placing them in computer-generated virtual environments. The methodology of developing and placing the faces/figures became an operational strategy for building virtual environments. In order to open up the range of art virtual environments, and develop operational strategies for visitors’ experience, the characters and their facial features are used as navigational strategies, signposts and methods of wayfinding in order to sustain a stream of consciousness type of navigation.
Faces and characters were designed to represent those intimate moments of self-reflection and confrontation that occur daily within ourselves and with others. They sought to reflect moments of wonderment, hurt, curiosity and humour that could subsequently be relinquished for more practical or purposeful endeavours. They were intended to create conditions in which visitors might reflect upon their emotional state, enabling their understanding and trust of their personal space, in which decisions are made and the nature of the world is determined. In order to extend the split-second, frozen moment of recognition that a painting affords, the caricatures and their scenes are given new dimensions as they become characters in a performative virtual reality. Emotables, distinct from avatars, are characters confronting visitors in the virtual environment to engage them in an interactive, stream of consciousness, non-linear dialogue. Visitors are also situated with a role in a virtual world, where they were required to adapt to the language of the environment in order to progress through the dynamics of a drama. The research showed that imagery created in a context of whimsy and fantasy could bring ontological meaning and aesthetic experience into the interactive environment, such that emotables, or facially expressive computer graphic characters, could be seen as another brushstroke in painting a world of virtual reality.

    Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video

    eXtended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research is focused on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS). This research work relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creation of spatial audio soundtracks for 360° video, as well as enabling the collection and analysis of captured metrics emerging from the user experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that control parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users. A partial spatialization of music was deemed ineffective when used for orientation. Additionally, our results also demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamic restricted FoV is statistically significant in mitigating VIMS, while maintaining desired levels of Presence.
Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, where the IVRUX artifact is the product of design knowledge and gave direction to research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate future directions in making 360° video a rich design space for interaction and narrative.
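The VIMS mitigation result above rests on dynamically narrowing the field of view as estimated visual motion grows. A minimal sketch of that mapping, from a mean optical-flow magnitude to a rendered FoV, is shown below; the thresholds and FoV bounds are illustrative assumptions, not values from the thesis:

```python
def restricted_fov(flow_magnitude: float,
                   full_fov_deg: float = 100.0,
                   min_fov_deg: float = 40.0,
                   low: float = 0.5,
                   high: float = 5.0) -> float:
    """Map a mean optical-flow magnitude (e.g. pixels/frame) to a field of view.

    Below `low` the view stays unrestricted; above `high` it is fully
    restricted; in between, the FoV narrows linearly with the motion estimate.
    """
    if flow_magnitude <= low:
        return full_fov_deg
    if flow_magnitude >= high:
        return min_fov_deg
    t = (flow_magnitude - low) / (high - low)
    return full_fov_deg - t * (full_fov_deg - min_fov_deg)
```

In a real pipeline, the per-frame motion estimate would come from an optical-flow or visual-SLAM module, as the abstract describes, and the returned FoV would drive a vignette over the video sphere.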

    Freeform 3D interactions in everyday environments

    Personal computing is continuously moving away from traditional input using mouse and keyboard, as new input technologies emerge. Recently, natural user interfaces (NUI) have led to interactive systems that are inspired by our physical interactions in the real world, and focus on enabling dexterous freehand input in 2D or 3D. Another recent trend is Augmented Reality (AR), which follows a similar goal to further reduce the gap between the real and the virtual, but predominantly focuses on output, by overlaying virtual information onto a tracked real-world 3D scene. Whilst AR and NUI technologies have been developed for both immersive 3D output as well as seamless 3D input, these have mostly been looked at separately. NUI focuses on sensing the user and enabling new forms of input; AR traditionally focuses on capturing the environment around us and enabling new forms of output that are registered to the real world. The output of NUI systems is mainly presented on a 2D display, while the input technologies for AR experiences, such as data gloves and body-worn motion trackers, are often uncomfortable and restricting when interacting in the real world. NUI and AR can be seen as very complementary, and bringing these two fields together can lead to new user experiences that radically change the way we interact with our everyday environments. The aim of this thesis is to enable real-time, low latency, dexterous input and immersive output without heavily instrumenting the user. The main challenge is to retain and to meaningfully combine the positive qualities that are attributed to both NUI and AR systems. I review work in the intersecting research fields of AR and NUI, and explore freehand 3D interactions with varying degrees of expressiveness, directness and mobility in various physical settings. There are a number of technical challenges that arise when designing a mixed NUI/AR system, which I will address in this work: What can we capture, and how?
How do we represent the real in the virtual? And how do we physically couple input and output? This is achieved by designing new systems, algorithms, and user experiences that explore the combination of AR and NUI.

    Humanoid Robots

    For many years, human beings have been trying, in every way possible, to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with increasing technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate certain systems of the human body. This research is intended not only to create humanoid robots, a great part of them constituting autonomous systems, but also to offer deeper knowledge of the systems that form the human body, with an eye toward possible applications in rehabilitation technology, bringing together studies related not only to Robotics but also to Biomechanics, Biomimetics and Cybernetics, among other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, that analyze and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision and locomotion.