
    Roomalive: Magical experiences enabled by scalable, adaptive projector-camera units

    RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The projector-depth camera units are individually auto-calibrating and self-localizing, and together create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive, and discuss the design challenges of adapting any game to any room.

    3D Multi-user interactive visualization with a shared large-scale display

    When multiple users interact with a virtual environment on a large-scale display, several issues need to be addressed to facilitate the interaction. In this thesis, three main topics for collaborative visualization are discussed: display setup, interactive visualization, and visual fatigue. The problems the author addresses are how multiple users can interact with a shared large-scale display depending on the display setup, and how they can interact with the shared visualization in a way that does not lead to visual fatigue. The first user study (Chapter 3) explores display setups for multi-user interaction with a shared large-scale display. The author describes the design of the three main display setups (a shared view, a split screen, and a split screen with navigation information) and a demonstration using these setups. The user study found that the split screen and the split screen with navigation information can improve users' confidence, reduce frustration, and are preferred over a shared view. However, a shared view can still provide effective interaction and collaboration, and the display setup did not have a large impact on usability and workload. Based on the first study, the author employed a shared view for multi-user interactive visualization with a shared large-scale display, due to its advantages. To improve interactive visualization with a shared view for multiple users, the author designed and conducted the second user study (Chapter 4). A conventional interaction technique, the mean tracking method, was not effective for more than three users. To overcome the limitations of current multi-user interactive visualization techniques, two interactive visualization techniques (the Object Shift Technique and the Activity-based Weighted Mean Tracking method) were developed and evaluated in the second user study.
The Object Shift Technique translates the virtual objects in the direction opposite to the movement of the Point of View (PoV), and the Activity-based Weighted Mean Tracking method assigns higher weight to active users than to stationary users when determining the location of the PoV. The results of the user study showed that these techniques can support collaboration, improve interactivity, and produce visual discomfort similar to the conventional method. The third study (Chapter 5) describes how to reduce visual fatigue in 3D stereoscopic visualization with a single Point of View (PoV). When multiple users interact with 3D stereoscopic VR using multi-user interactive visualization techniques and are close to the virtual objects, they can experience 3D visual fatigue from the large disparity. To reduce this fatigue, an Adaptive Interpupillary Distance (Adaptive IPD) adjustment technique was developed. To evaluate the Adaptive IPD method, the author compared it to traditional 3D stereoscopic and monoscopic visualization techniques. Through the user experiments, the author confirmed that the proposed method can reduce visual discomfort yet maintain compelling depth perception, providing the most preferred 3D stereoscopic visualization experience. For these studies, the author developed a software framework and designed a set of experiments (Chapter 6). The framework architecture, which incorporates the three main ideas, is described. A demonstration application for multi-dimensional decision making was developed using the framework.
The primary contributions of this thesis include a literature review of multi-user interaction with a shared large-scale display; deeper insights into three display setups for multi-user interaction; the development of the Object Shift Technique, the Activity-based Weighted Mean Tracking method, and the Adaptive Interpupillary Distance adjustment technique; the evaluation of these three novel interaction techniques; and the development of a framework supporting multi-user interaction with a shared large-scale display, together with its application to a multi-dimensional decision-making VR system.
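    As a rough illustration of the Activity-based Weighted Mean Tracking idea described above, the sketch below computes a shared PoV as a weighted mean of user head positions, weighting recently moving ("active") users more heavily than stationary ones. All names, weights, and thresholds here are illustrative assumptions, not the thesis's actual implementation.

```python
def weighted_mean_pov(positions, speeds, active_weight=3.0, idle_weight=1.0,
                      speed_threshold=0.05):
    """Return the shared PoV as a weighted mean of user head positions.

    positions: list of (x, y, z) head positions, one per user
    speeds:    list of recent head speeds (m/s), one per user

    Users moving faster than `speed_threshold` count as active and
    receive `active_weight`; everyone else receives `idle_weight`.
    """
    weights = [active_weight if s > speed_threshold else idle_weight
               for s in speeds]
    total = sum(weights)
    return tuple(sum(w * p[i] for w, p in zip(weights, positions)) / total
                 for i in range(3))

# Two near-stationary users, plus one active user walking toward z = 2.0:
pov = weighted_mean_pov(
    positions=[(0.0, 1.7, 0.0), (1.0, 1.7, 0.0), (0.5, 1.7, 2.0)],
    speeds=[0.0, 0.01, 0.5],
)
# The active third user pulls the PoV toward z = 2.0 (here, z = 1.2
# instead of the unweighted mean of roughly 0.67).
```

    A plain mean tracking method is the special case where every user has equal weight; the activity weighting is what keeps a single moving user from being averaged away by several stationary ones.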

    Leveraging eXtended Reality & Human-Computer Interaction for User Experience in 360° Video

    Extended Reality systems have resurged as a medium for work and entertainment. While 360° video has been characterized as less immersive than computer-generated VR, its realism, ease of use, and affordability mean it is in widespread commercial use. Based on the prevalence and potential of the 360° video format, this research focuses on improving and augmenting the user experience of watching 360° video. By leveraging knowledge from eXtended Reality (XR) systems and Human-Computer Interaction (HCI), this research addresses two issues affecting user experience in 360° video: Attention Guidance and Visually Induced Motion Sickness (VIMS). This research relies on the construction of multiple artifacts to answer the defined research questions: (1) IVRUX, a tool for analysis of immersive VR narrative experiences; (2) Cue Control, a tool for creating spatial audio soundtracks for 360° video that also enables the collection and analysis of metrics captured from the user experience; and (3) the VIMS mitigation pipeline, a linear sequence of modules (including optical flow and visual SLAM, among others) that controls parameters for visual modifications such as a restricted Field of View (FoV). These artifacts are accompanied by evaluation studies targeting the defined research questions. Through Cue Control, this research shows that non-diegetic music can be spatialized to act as orientation for users, while a partial spatialization of music was deemed ineffective for orientation. Additionally, our results demonstrate that diegetic sounds are used for notification rather than orientation. Through the VIMS mitigation pipeline, this research shows that a dynamically restricted FoV is statistically significant in mitigating VIMS while maintaining desired levels of Presence.
Both Cue Control and the VIMS mitigation pipeline emerged from a Research through Design (RtD) approach, in which the IVRUX artifact is the product of design knowledge and gave direction to the research. The research presented in this thesis is of interest to practitioners and researchers working on 360° video and helps delineate future directions in making 360° video a rich design space for interaction and narrative.
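    As a rough sketch of the dynamically restricted FoV idea, the snippet below maps a per-frame motion estimate (for example, the mean optical-flow magnitude produced by an earlier pipeline stage) to a narrowed FoV, with exponential smoothing so the restriction does not flicker. The thresholds, angles, and function names are illustrative assumptions, not the pipeline's actual parameters.

```python
def restricted_fov(flow_magnitude, full_fov=110.0, min_fov=60.0,
                   low=2.0, high=12.0):
    """Map a frame's mean optical-flow magnitude (px/frame) to a FoV (degrees).

    Below `low` the full FoV is shown; above `high` the FoV is fully
    restricted; in between it is linearly interpolated.
    """
    if flow_magnitude <= low:
        return full_fov
    if flow_magnitude >= high:
        return min_fov
    t = (flow_magnitude - low) / (high - low)
    return full_fov - t * (full_fov - min_fov)

def smoothed_fov(prev_fov, target_fov, rate=0.1):
    """Exponentially smooth FoV changes so the vignette eases in and out."""
    return prev_fov + rate * (target_fov - prev_fov)

# A static shot keeps the full view; a fast pan narrows it.
print(restricted_fov(1.0))   # 110.0
print(restricted_fov(7.0))   # 85.0
print(restricted_fov(20.0))  # 60.0
```

    In a real pipeline the restriction would be rendered as a soft-edged vignette around the viewport center; the key property illustrated here is that the FoV tracks apparent motion rather than being fixed.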

    Distributed Cinema: Interactive, Networked Spectatorship In The Age Of Digital Media

    Digital media has changed much of how people watch, consume, and interact with media. The loss of indexicality, or the potential infidelity between an image and its source, contributes to a distrust of images. The ubiquity of interactive media changes the aesthetics of images, as viewers begin to expect interactivity. Networked media changes not only the ways in which viewers access media, but also how they communicate with each other about it. The Tulse Luper Suitcases encapsulates all of these phenomena.

    Seeing Archaeology In 3D: Digital Spatial Vision

    3D digital archaeology is a growing subfield of archaeological practice. This paper assesses the role 3D archaeology plays in archaeological theory and practice, particularly with reference to ways of seeing. Digital reconstructions occupy a particular niche as manipulable representations of archaeological contexts, enabling them to convey information and interpretation in ways previously impossible in the field. Using these new tools allows archaeologists to see spatial data in new ways and therefore to more fully explore and interpret it. Low-cost methods of 3D model production, including a new commercial structured-light scanning device, are employed within previously excavated architectural contexts of ancient Pompeii to explore the feasibility and benefits of 3D archaeology's ways of seeing. 3D archaeology is shown to enable exploratory data analysis throughout the archaeological process.

    Where Truth Lies

    "This boldly original book traces the evolution of documentary film and photography as they migrated onto digital platforms during the first decades of the twenty-first century. Kris Fallon examines the emergence of several key media forms—social networking and crowdsourcing, video games and virtual environments, big data and data visualization—and demonstrates the formative influence of political conflict and the documentary film tradition on their evolution and cultural integration. Focusing on particular moments of political rupture, Fallon argues that ideological rifts inspired the adoption and adaptation of newly available technologies to encourage social mobilization and political action, a function performed for much of the previous century by independent documentary film. Positioning documentary film and digital media side by side in the political sphere, Fallon asserts that “truth” now lies in a new set of media forms and discursive practices that implicitly shape the documentation of everything from widespread cultural spectacles like wars and presidential elections to more invisible or isolated phenomena like the Abu Ghraib torture scandal or the “fake news” debates of 2016. “Looking at a unique and intriguing set of ‘hybrid media,’ Fallon convincingly makes a claim about a change in the form of new media, one linking politics, aesthetics, and technology.” ALEXANDRA JUHASZ, Brooklyn College, CUNY “Where Truth Lies does the difficult and much-needed work of unpacking how the documentary impulse is shifting in the digital age, both through the profound influence of digital aesthetics and computational thinking and through the ways traditional documentary is infusing digital expression.” JENNIFER MALKOWSKI, author of Dying in Full Detail: Mortality and Digital Documentary KRIS FALLON is Assistant Professor of Cinema and Digital Media at the University of California, Davis.

    Ubiquitous interactive displays: magical experiences beyond the screen

    Ubiquitous Interactive Displays are interfaces that extend interaction beyond traditional flat screens. This thesis presents a series of proof-of-concept systems exploring three interactive displays. The first part of this thesis explores interactive projective displays, where projected light transforms and enhances physical objects in our environment. The second part explores gestural displays, where traditional mobile devices such as smartphones are equipped with depth sensors to enable input and output around the device. Finally, I introduce a new tactile display that imbues our physical spaces with a sense of touch in mid-air, without requiring the user to wear a physical device. These systems explore a future where interfaces are inherently everywhere, connecting our physical objects and spaces together through visual, gestural, and tactile displays. I aim to demonstrate new technical innovations as well as compelling interactions with one or more users and their physical environment. These new interactive displays enable novel experiences beyond flat screens that blur the line between the physical and virtual world.