
    Understanding Context to Capture when Reconstructing Meaningful Spaces for Remote Instruction and Connecting in XR

    Recent technological advances are enabling HCI researchers to explore interaction possibilities for remote XR collaboration using high-fidelity reconstructions of physical activity spaces. However, these reconstructions are often created with little user involvement and with an overt focus on capturing sensory context that does not necessarily augment an informal social experience. This work seeks to understand the social context that is important to capture in reconstructions that enable XR applications for informal instructional scenarios. Our study involved the evaluation of an XR remote guidance prototype by 8 intergenerational groups of closely related gardeners, using reconstructions of personally meaningful spaces in their gardens. Our findings contextualize physical objects and areas with various motivations related to gardening and detail perceptions of XR that might affect the use of reconstructions for remote interaction. We discuss implications for user involvement in creating reconstructions that better translate real-world experience, encourage reflection, incorporate privacy considerations, and preserve shared experiences with XR as a medium for informal intergenerational activities. Comment: 26 pages, 5 figures, 4 tables

    Usability framework for mobile augmented reality language learning

    Several decades after its introduction, the existing ISO9241-11 usability framework is still widely used in Mobile Augmented Reality (MAR) language learning. The existing framework is generic and can be applied to diverse emerging technologies such as electronic and mobile learning. However, technologies like MAR have interaction properties that are significantly unique and require different usability processes. Hence, applying the existing framework to MAR can lead to non-optimized, inefficient, and ineffective outcomes. Furthermore, state-of-the-art analysis models such as machine learning are not apparent in MAR usability studies, despite evidence of positive outcomes in other learning technologies. In recent MAR learning studies, machine learning benefits such as problem identification and prioritization were non-existent. These setbacks could slow the advancement of MAR language learning, which mainly aims to improve language proficiency among MAR users, especially in English communication. Therefore, this research proposed the Usability Framework for MAR (UFMAR), which addresses the identified research problems and gaps in language learning. UFMAR introduces an improved data collection method called the Individual Interaction Clustering-based Usability Measuring Instrument (IICUMI), followed by a machine learning-driven analysis model called Clustering-based Usability Prioritization Analysis (CUPA) and a prioritization quantifier called the Usability Clustering Prioritization Model (UCPM). UFMAR showed empirical evidence of significantly improving usability in MAR by capitalizing on its unique interaction properties, and it enhances the existing framework with new abilities to systematically identify and prioritize MAR usability issues. Through the experimental results, it was found that the IICUMI method was 50% more effective than the existing framework, while CUPA and UCPM were 57% more effective. UFMAR also produced 86% accuracy in analysis results and was 79% more efficient in framework implementation. UFMAR was validated through three cycles of the experimental process, with triangulation through expert reviews, and proven to be a fitting framework for MAR language learning.
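    The abstract does not detail how CUPA clusters and ranks usability data; as a rough, hypothetical illustration of clustering-based prioritization (the feature columns, the k-means choice, and the severity ranking below are assumptions, not the paper's actual IICUMI/CUPA/UCPM definitions), a minimal Python sketch:

```python
# Hypothetical sketch: cluster per-task usability observations and rank the
# clusters by mean severity so the most problematic interaction patterns
# surface first. Feature choice and k are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

# Each row: [task_time_s, error_count, help_requests, severity_rating_1_to_5]
observations = np.array([
    [35.0, 1, 0, 2],
    [120.5, 6, 3, 5],
    [48.2, 2, 1, 3],
    [95.0, 5, 2, 4],
    [30.1, 0, 0, 1],
    [110.3, 4, 3, 5],
])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(observations)

# Rank clusters by mean severity (last column) to prioritise issues.
priorities = []
for label in range(kmeans.n_clusters):
    members = observations[kmeans.labels_ == label]
    priorities.append((label, members[:, -1].mean(), len(members)))

for label, severity, size in sorted(priorities, key=lambda p: p[1], reverse=True):
    print(f"cluster {label}: mean severity {severity:.1f} across {size} observations")
```

    In a setup like this, the highest-severity cluster would be the first candidate set of usability issues for designers to address.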

    On-the-fly dense 3D surface reconstruction for geometry-aware augmented reality.

    Augmented Reality (AR) is an emerging technology that makes seamless connections between virtual space and the real world by superimposing computer-generated information onto the real-world environment. AR can provide additional information in a more intuitive and natural way than any other information-delivery method that humans have ever invented. Camera tracking is the enabling technology for AR and has been well studied over the last few decades. Apart from tracking, sensing and perception of the surrounding environment are also very important and challenging problems. Although existing hardware solutions such as Microsoft Kinect and HoloLens can sense and build the environmental structure, they are either too bulky or too expensive for AR. In this thesis, challenging real-time dense 3D surface reconstruction technologies are studied and reformulated to move basic position-aware AR towards geometry-aware AR, with an outlook to context-aware AR. We initially propose to reconstruct the dense environmental surface from the sparse points of Simultaneous Localisation and Mapping (SLAM), but this approach is prone to failure in challenging Minimally Invasive Surgery (MIS) scenes with deformation and surgical smoke. We subsequently adopt stereo vision with SLAM for more accurate and robust results. Building on the recent success of deep learning, we present learning-based single-image reconstruction and achieve state-of-the-art results. Moreover, we propose context-aware AR, one step beyond purely geometry-aware AR, towards high-level conceptual interaction modelling in complex AR environments for an enhanced user experience. Finally, a learning-based smoke removal method is proposed to ensure accurate and robust reconstruction under extreme conditions such as the presence of surgical smoke.
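    The thesis's SLAM-based and learning-based pipelines are not reproduced here; as a minimal sketch of the underlying idea of dense depth from a calibrated, rectified stereo pair (file names, intrinsics, and matcher parameters are illustrative assumptions, not the thesis's method), in Python with OpenCV:

```python
# Minimal sketch: dense depth from a rectified stereo pair via semi-global
# block matching. Parameter values, camera intrinsics, and file names are
# illustrative assumptions.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=64,   # must be divisible by 16
    blockSize=5,
)
# OpenCV returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Convert disparity to metric depth: depth = focal_length * baseline / disparity.
focal_length_px = 700.0   # assumed focal length in pixels
baseline_m = 0.12         # assumed stereo baseline in metres
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_length_px * baseline_m / disparity[valid]
```

    A per-frame depth map like this is only a starting point; the thesis combines stereo with SLAM and later with learned single-image reconstruction to cope with deformation and surgical smoke.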

    A context-aware method for authentically simulating outdoors shadows for mobile augmented reality

    Visual coherence between virtual and real objects is a major issue in creating convincing augmented reality (AR) applications. To achieve this seamless integration, actual light conditions must be determined in real time to ensure that virtual objects are correctly illuminated and cast consistent shadows. In this paper, we propose a novel method to estimate daylight illumination and use this information in outdoor AR applications to render virtual objects with coherent shadows. The illumination parameters are acquired in real time from context-aware live sensor data, and the method works under unprepared natural conditions. We also present a novel and rapid implementation of a state-of-the-art skylight model, from which the illumination parameters are derived. The Sun's position is calculated from the user's location and the time of day, with relative rotational differences estimated from a gyroscope, compass and accelerometer. The results show that our method can generate visually credible AR scenes with consistent shadows rendered from the recovered illumination.
    This work was supported in part by the Fundação para a Ciência e Tecnologia - FCT, under grant number SFRH/BD/73129/2010, and the European Union (COMPETE, QREN and FEDER), under the project RECI/EEI-SII/0360/2012 "MASSIVE - Multimodal Acknowledgeable MultiSenSorial Immersive Virtual Environments". This work is also supported by the project "TEC4Growth - Pervasive Intelligence, Enhancers and Proofs of Concept with Industrial Impact/NORTE-01-0145-FEDER-000020", financed by the North Portugal Regional Operational Programme (NORTE 2020), under the PORTUGAL 2020 Partnership Agreement, and through the European Regional Development Fund (ERDF).
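    The paper's skylight model and sensor fusion are not reproduced here; as a minimal sketch of the sun-position step it describes (solar elevation and azimuth from the user's location and time of day, then a simple shadow direction), using the pysolar library as an assumed stand-in rather than the authors' implementation:

```python
# Hypothetical sketch of the sun-position step: solar elevation/azimuth from
# user location and time, then a simple shadow direction and length factor.
# pysolar is an assumed stand-in library, not the authors' skylight model.
import math
from datetime import datetime, timezone

from pysolar.solar import get_altitude, get_azimuth

latitude, longitude = 41.15, -8.61                         # illustrative coordinates
when = datetime(2024, 6, 21, 15, 0, tzinfo=timezone.utc)   # must be timezone-aware

elevation_deg = get_altitude(latitude, longitude, when)  # sun angle above the horizon
azimuth_deg = get_azimuth(latitude, longitude, when)     # sun bearing in degrees

# A cast shadow points away from the sun; it lengthens as the sun gets lower.
shadow_bearing_deg = (azimuth_deg + 180.0) % 360.0
length_per_metre = (1.0 / math.tan(math.radians(elevation_deg))
                    if elevation_deg > 0 else float("inf"))

print(f"sun elevation {elevation_deg:.1f} deg, shadow bearing {shadow_bearing_deg:.1f} deg, "
      f"shadow length {length_per_metre:.2f} m per metre of object height")
```

    In the paper, the device's gyroscope, compass and accelerometer then supply the rotational offset between this world-space sun direction and the camera frame, so the virtual shadow stays consistent as the user moves.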

    The Shadow Space of Allegorical Machines: Situating Locative Media

    This dissertation utilizes a media archaeological approach to the analysis of locative media, which are technologies that organize an experience of spatial orientation. For instance, a user can use a mobile phone to connect to a cellular network and generate a visualization of the material space in which he or she is positioned, with annotated or interactive information on the screen. My critical approach to locative media is influenced by a historical constellation of orientation technologies, their contributions to the social imaginations of space, and the resulting experiences and expectations that are negotiated by the material, symbolic, and ideal. Four case studies on the astrolabe, magnetic compass, divining rod, and digital locative media make up a broader historical arrangement of which, I argue, digital locative media are the latest manifestation. Like other media technologies such as radio or television, these spatial technologies offer a window onto another world while also offering (other)spaces of symbolic and cultural codes that are layered over material space. The ability to reveal these otherspaces is associated with the recurring transcendent logic of locative media as individuals are encouraged to unveil the real behind the apparent in order to become united with a hybrid (and enchanted) ecology of the virtual and real. My locative media archaeology involves a theorization of allegorical machines, a term I use to analyze the interfaced interpretation of a shadow (imagined or informational) otherspace in relation to a porous correspondence between subject and space. This theorization is an interrogation of how engineers, technological promoters, and users position allegorical machines as making the supersensible sensible through an interface with the sublime. In other words, locative media are technological attempts to make the vague intelligible by bringing what lies outside the realm of physical experience into contact with the senses. Transcending to otherspaces such as the electromagnetic spectrum or the digital network involves an inherent metaphysics of the interface, which, as a liaison between bodies and spaces, generates animations such as the one that is the focus of this dissertation: the sublime desire or fear of unveiling the unknown space beyond space.

    METROPOLITAN ENCHANTMENT AND DISENCHANTMENT. METROPOLITAN ANTHROPOLOGY FOR THE CONTEMPORARY LIVING MAP CONSTRUCTION

    We can no longer interpret the contemporary metropolis as we did in the last century. The thought of civil economy regarding the contemporary metropolis conflicts more or less radically with the merely acquisitive dimension of the behaviour of its citizens. What is needed is therefore a new capacity for imagining the economic-productive future of the city: hybrid social enterprises, economically sustainable, structured and capable of using technologies, could be a solution for producing value and distributing it fairly and inclusively. Metropolitan Urbanity is another issue to establish: the metropolis needs new spaces where inclusion can occur, and where a repository of the imagery can be recreated. What is the ontology behind the technique of metropolitan planning and management, its vision and its symbols? Competitiveness, speed, and meritocracy are political words, not technical ones. Metropolitan Urbanity is the characteristic of a polis that expresses itself in its public places. Today, however, public places are private ones destined for public use. The Common Good has always had a space of representation in the city, which was the public space. Today, the Green-Grey Infrastructure is the metropolitan city's monument, one that communicates a value to future generations and must therefore be recognised and imagined; it is the production of the metropolitan symbolic imagery, the new magic of the city.

    Facilitating the development of location-based experiences

    Location-based experiences depend on the availability and reliability of wireless infrastructures such as GPS, Wi-Fi, or mobile phone networks; but these technologies are not universally available everywhere and anytime. Studies of deployed experiences have shown that the characteristics of wireless infrastructures, especially their limited coverage and accuracy, have a major impact on the performance of an experience. It is in the designers' interest to be aware of technological restrictions to their work. Current state-of-the-art authoring tools for location-based experiences implement one common overarching model: the idea of taking a map of the physical area in which the experience is to take place and then somehow placing virtual trigger zones on top of it. This model leaves no space for technological shortcomings and assumes a perfect registration between the real and the virtual. In order to increase the designers' awareness of the technology, this thesis suggests revealing the wireless infrastructures at authoring time through appropriate tools and workflows. This is thought to aid the designers in better understanding the characteristics of the underlying technology and thereby enable them to deal with potential problems before their work is deployed to the public. This approach was studied in practice by working with two groups of professional artists who built two commercially commissioned location-based experiences, and evaluated using qualitative research methods. The first experience is a pervasive game for mobile phones called 'Love City' that relies on cellular positioning. The second experience is a pervasive game for cyclists called 'Rider Spoke' that relies on Wi-Fi positioning. The evaluation of these two experiences revealed the importance of an integrated suite of tools that spans indoors and outdoors, and which supports the designers in better understanding the location mechanism that they decided to work with. It was found that designers can successfully create their experiences to deal with patchy, coarse-grained, and varying wireless networks as long as they are made aware of those characteristics.
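    The "map plus virtual trigger zones" authoring model that the thesis critiques can be illustrated with a short, hypothetical sketch (the zone names, coordinates, and radii below are invented for illustration). Note how a real deployment also has to budget for the positioning error that the thesis argues should be surfaced to designers at authoring time:

```python
# Hypothetical sketch of circular trigger zones placed on a map, fired when a
# position estimate falls inside. Zone data and coordinates are invented.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

trigger_zones = [
    {"name": "gallery_entrance", "lat": 52.9548, "lon": -1.1581, "radius_m": 30.0},
    {"name": "market_square", "lat": 52.9533, "lon": -1.1500, "radius_m": 50.0},
]

def fired_zones(lat, lon, accuracy_m=0.0):
    """Return zones whose radius (inflated by the reported accuracy) contains the fix."""
    return [z["name"] for z in trigger_zones
            if haversine_m(lat, lon, z["lat"], z["lon"]) <= z["radius_m"] + accuracy_m]

# A coarse cellular or Wi-Fi fix with tens of metres of error can fire zones
# the user never physically entered, which is exactly the registration problem
# the thesis wants designers to see while authoring.
print(fired_zones(52.9546, -1.1579, accuracy_m=15.0))
```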