
    Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video

    EXtended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research is focused on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS). This research work relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creation of spatial audio soundtracks for 360° video that also enables the collection and analysis of metrics captured from the user experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that controls parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users, whereas a partial spatialization of music was deemed ineffective when used for orientation. Additionally, our results demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamic restricted FoV yields a statistically significant reduction in VIMS while maintaining desired levels of Presence.
Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, where the IVRUX artifact is the product of design knowledge and gave direction to the research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate future directions in making 360° video a rich design space for interaction and narrative.

    MIRACLE Handbook: Guidelines for Mixed Reality Applications for Culture and Learning Experiences

    Transferred from Doria

    Supporting Real-Time Contextual Inquiry Through Sensor Data

    A key challenge in carrying out product design research is obtaining rich contextual information about use in the wild. We present a method that algorithmically mediates between participants, researchers, and objects in order to enable real-time collaborative sensemaking. It facilitates contextual inquiry, revealing behaviours and motivations that frame product use in the wild. In particular, we are interested in developing a practice of use-driven design, where products become research tools that generate design insights grounded in user experiences. The value of this method was explored through the deployment of a collection of Bluetooth speakers that capture and stream live data about their movement and operation to remote but co-present researchers. Researchers monitored a visualisation of the real-time data to build up a picture of how the speakers were being used, responding to moments of activity within the data, initiating text conversations and prompting participants to capture photos and video. Based on the findings of this explorative study, we discuss the value of this method, how it compares to contemporary research practices, and the potential of machine learning to scale it up for use within industrial contexts. As greater agency is given to both objects and algorithms, we explore ways to empower ethnographers and participants to actively collaborate within remote real-time research.
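The study above hinges on spotting "moments of activity" in streamed sensor data so researchers can respond in the moment. As a sketch of that step only (not the authors' implementation; the windowed-mean detector, window size, and threshold are all invented for illustration):

```python
# Illustrative activity detector for a stream of sensor magnitudes (e.g.
# accelerometer readings from a deployed speaker). A moment of activity is
# flagged when the mean over a short sliding window exceeds a threshold,
# at which point a researcher could be prompted to start a conversation.

from collections import deque

def activity_moments(samples, window=3, threshold=1.5):
    """Yield sample indices where the windowed mean exceeds the threshold."""
    recent = deque(maxlen=window)
    for i, magnitude in enumerate(samples):
        recent.append(magnitude)
        if len(recent) == window and sum(recent) / window > threshold:
            yield i

stream = [0.1, 0.2, 0.1, 2.0, 2.5, 2.2, 0.3, 0.1]
print(list(activity_moments(stream)))  # prints [4, 5, 6]
```

In a live deployment the same logic would run over a network stream rather than a list, feeding the researchers' real-time visualisation.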

    An Enactive Approach to Technologically Mediated Learning Through Play

    This thesis investigated the application of enactive principles to the design of classroom technologies for young children’s learning through play. This study identified the attributes of an enactive pedagogy, in order to develop a design framework to accommodate enactive learning processes. From an enactive perspective, the learner is defined as an autonomous agent, capable of adaptation via the recursive consumption of self-generated meaning within the constraints of a social and material world. Adaptation is the parallel development of mind and body that occurs through interaction, which renders knowledge contingent on the environment from which it emerged. Parallel development means that action and perception in learning are as critical as thinking. An enactive approach to design therefore aspires to make the physical and social interaction with technology meaningful to the learning objective, rather than an aside to cognitive tasks. The design framework considered in detail the necessary affordances in terms of interaction, activity and context. In a further interpretation of enactive principles, this thesis recognised play and pretence as vehicles for designing and evaluating enactive learning and the embodied use of technology. In answering the research question, the interpreted framework was applied as a novel approach to designing and analysing children’s engagement with technology for learning, and worked towards a paradigm where interaction is part of the learning experience. The aspiration for the framework was to inform the design of interaction modalities that allow users to exercise the inherent mechanisms they have for making sense of the world. However, before making the claim to support enactive learning processes, there was a question as to whether technologically mediated realities were suitable environments in which to apply this framework.
Given the emphasis on the physical world and action, it was the intention of the research and design activities to explore whether digital artefacts and spaces were an impoverished reality for enactive learning, or if digital objects and spaces could afford sufficient ’reality’ to be referents in social play behaviours. The project embedded in this research was tasked with creating deployable technologies that could be used in the classroom. Consequently, this framework was applied in practice, whereby the design practice and deployed technologies served as pragmatic tools to investigate the potential for interactive technologies in children’s physical, social and cognitive learning. To understand the context, underpin the design framework, and evaluate the impact of any technological interventions in school life, the design practice was informed by ethnographic methodologies. The design process responded to cascading findings from phased research activities. The initial fieldwork located meaning-making activities within the classroom, with a view to re-appropriating situated and familiar practices. In the next stage of the design practice, this formative analysis determined the objectives of the participatory sessions, which in turn contributed to the creation of technologies suitable for an inquiry into enactive learning. The final technologies used standard school equipment with bespoke software, enabling children to engage with real-time compositing and tracking applications installed in the classrooms’ role-play spaces. The evaluation of the play space technologies in the wild revealed that, under certain conditions, there was evidence of embodied presence in the children’s social, physical and affective behaviour, illustrating how mediated realities can extend physical spaces.
These findings suggest that the attention to meaningful interaction, a presence in the environment as a result of an active role, and a social presence, as outlined in the design framework, can lead to the emergence of observable enactive learning processes. As the design framework was applied, these principles could be examined and revised. Two notable examples of revisions to the design framework, in light of the applied practice, related to: (1) a key affordance for meaningful action to emerge required opportunities for direct and immediate engagement; and (2) a situated awareness of the self and other inhabitants in the mediated space required support across the spectrum of social interaction. The application of the design framework enabled this investigation to move beyond a theoretical discourse.

    Situated Analytics for Data Scientists

    Much of Mark Weiser's vision of "ubiquitous computing" has come to fruition: We live in a world of interfaces that connect us with systems, devices, and people wherever we are. However, those of us in jobs that involve analyzing data and developing software find ourselves tied to environments that limit when and where we may conduct our work; it is ungainly and awkward to pull out a laptop during a stroll through a park, for example, but difficult to write a program on one's phone. In this dissertation, I discuss the current state of data visualization in data science and analysis workflows, the emerging domains of immersive and situated analytics, and how immersive and situated implementations and visualization techniques can be used to support data science. I will then describe the results of several years of my own empirical work with data scientists and other analytical professionals, particularly (though not exclusively) those employed with the U.S. Department of Commerce. These results, as they relate to visualization and visual analytics design based on user task performance, observations by the researcher and participants, and evaluation of observational data collected during user sessions, represent the first thread of research I will discuss in this dissertation. I will demonstrate how they might act as the guiding basis for my implementation of immersive and situated analytics systems and techniques. As a data scientist and economist myself, I am naturally inclined to want to use high-frequency observational data to the end of realizing a research goal; indeed, a large part of my research contributions, and a second "thread" of research to be presented in this dissertation, have been around interpreting user behavior using real-time data collected during user sessions.
I argue that the relationship between immersive analytics and data science can and should be reciprocal: While immersive implementations can support data science work, methods borrowed from data science are particularly well-suited for supporting the evaluation of the embodied interactions common in immersive and situated environments. I make this argument based on both the ease and the importance, experienced during the course of my own empirical work with data scientists, of collecting spatial data from user sessions via the sensors required for immersive systems to function. As part of this thread of research working from this perspective, this dissertation will introduce a framework for interpreting user session data that I evaluate with user experience researchers working in the tech industry. Finally, this dissertation will present a synthesis of these two threads of research. I combine the design guidelines I derive from my empirical work with machine learning and signal processing techniques to interpret user behavior in real time in Wizualization, a mid-air gesture and speech-based augmented reality visual analytics system.
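The dissertation above argues that sensor streams from immersive sessions lend themselves to signal-processing treatment. As a sketch of what one such step might look like (not the dissertation's framework; the smoothing window, the dwell radius, and the function names are all invented for illustration):

```python
# Hypothetical session-analysis step: smooth a noisy 2-D head-position trace
# with a trailing moving average, then flag "dwell" samples where the user
# barely moved since the previous sample. Parameter values are illustrative.

def moving_average(trace, window=3):
    """Trailing moving average over a list of (x, y) positions."""
    smoothed = []
    for i in range(len(trace)):
        chunk = trace[max(0, i - window + 1): i + 1]
        x = sum(p[0] for p in chunk) / len(chunk)
        y = sum(p[1] for p in chunk) / len(chunk)
        smoothed.append((x, y))
    return smoothed

def dwell_indices(trace, radius=0.2):
    """Indices where the step from the prior sample is shorter than radius."""
    return [i for i in range(1, len(trace))
            if ((trace[i][0] - trace[i - 1][0]) ** 2 +
                (trace[i][1] - trace[i - 1][1]) ** 2) ** 0.5 < radius]

raw = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.1), (1.0, 1.0), (1.05, 1.0)]
print(dwell_indices(moving_average(raw)))  # prints [1, 2]
```

Dwell intervals like these are a common proxy for visual attention in session data; a fuller framework would combine many such features before any machine-learning stage.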

    Gesture Object Interfaces to enable a world of multiple projections

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. [209]-226). Tangible Media as an area has not explored how the tangible handle is more than a marker or place-holder for digital data. Tangible Media can do more. It has the power to materialize and redefine our conception of space and content during the creative process. It can vary from an abstract token that represents a movie to an anthropomorphic plush that reflects the behavior of a sibling during play. My work begins by extending tangible concepts of representation and token-based interactions into movie editing and play scenarios. Through several design iterations and research studies, I establish tangible technologies to drive visual and oral perspectives along with finalized creative works, all during a child's play and exploration. I define the framework, Gesture Object Interfaces, expanding on the fields of Tangible User Interaction and Gesture Recognition. Gesture is a mechanism that can reinforce or create the anthropomorphism of an object. It can give the object life. A Gesture Object is an object in hand while doing anthropomorphized gestures. Gesture Object Interfaces engender new visual and narrative perspectives as part of automatic film assembly during children's play. I generated a suite of automatic film assembly tools accessible to diverse users. The tools that I designed allow for capture, editing and performing to be completely indistinguishable from one another. Gestures integrated with objects become a coherent interface on top of natural play. I built a distributed, modular camera environment and gesture interaction to control that environment. The goal of these new technologies is to motivate children to take new visual and narrative perspectives.
In this dissertation I present four tangible platforms that I created as alternatives to the usual fragmented and sequential capturing, editing and performing of narratives available to users of current storytelling tools. I developed Play it by Eye, Frame it by Hand, a new generation of narrative tools that shift the frame of reference from the eye to the hand, from the viewpoint (where the eye is) to the standpoint (where the hand is). In Play it by Eye, Frame it by Hand environments, children discover atypical perspectives through the lens of everyday objects. When using Picture This!, children imagine how an object would appear relative to the viewpoint of the toy. They iterate between trying and correcting in a world of multiple perspectives. The results are entirely new genres of child-created films, where children finally capture the cherished visual idioms of action and drama. I report my design process over the course of four tangible research projects that I evaluate during qualitative observations with over one hundred 4- to 14-year-old users. Based on these research findings, I propose a class of moviemaking tools that transform the way users interpret the world visually and through storytelling. By Catherine Nicole Vaucelle, Ph.D.

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary to relate it as fully as their detailed explanations of other aspects such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers show that even making-based learning has a strong verbal element to it. However, verbal language alone does not appear to be adequate for a comprehensive language of touch. Graduate designer-makers’ descriptive practices combined non-verbal manipulation with verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than groups of verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers’ capabilities.
