
    Viewport- and World-based Personal Device Point-Select Interactions in Augmented Reality

    Personal smart devices have demonstrated a variety of efficient techniques for pointing and selecting on physical displays. However, when migrating these input techniques to augmented reality, it is unclear both what the relative performance of different techniques will be, given the immersive nature of the environment, and how viewport-based versus world-based pointing methods will impact performance. To better understand the impact of device and viewing perspective on pointing in augmented reality, in this thesis we present the results of two controlled experiments comparing pointing conditions that leverage various smartphone- and smartwatch-based external display pointing techniques, and examining viewport-based versus world-based target acquisition paradigms. Our results demonstrate that viewport-based techniques offer faster selection, and that both smartwatch- and smartphone-based pointing techniques represent high-performance options for performing distant target acquisition tasks in augmented reality.
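
    To make the viewport-based versus world-based distinction concrete, the following is a minimal, hypothetical Python sketch rather than code from the thesis; the HeadPose class and the reduction of pointing to a single yaw angle are simplifying assumptions. A viewport-based cursor is expressed in the coordinate frame of the headset's view and therefore follows head rotation, while a world-based cursor is expressed in world coordinates and stays anchored in the scene.

        from dataclasses import dataclass

        # Illustrative sketch only: pointing is reduced to a single yaw angle to
        # highlight the viewport-based vs. world-based distinction; this is not
        # an implementation from the thesis.
        @dataclass
        class HeadPose:
            yaw_deg: float  # direction the viewport currently faces, in world degrees

        def viewport_cursor_world_yaw(head: HeadPose, cursor_yaw_in_viewport: float) -> float:
            # Viewport-based: the device steers a cursor defined in viewport (screen)
            # coordinates, so turning the head carries the cursor along with the view.
            return head.yaw_deg + cursor_yaw_in_viewport

        def world_cursor_world_yaw(head: HeadPose, cursor_yaw_in_world: float) -> float:
            # World-based: the device steers a cursor anchored in world coordinates,
            # so the cursor stays where it is when the head turns (head pose unused).
            return cursor_yaw_in_world

        head = HeadPose(yaw_deg=30.0)
        print(viewport_cursor_world_yaw(head, 10.0))  # 40.0: cursor follows the view
        print(world_cursor_world_yaw(head, 10.0))     # 10.0: cursor fixed in the world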

    Barehand Mode Switching in Touch and Mid-Air Interfaces

    Raskin defines a mode as a distinct setting within an interface where the same user input will produce results different from those it would produce in other settings. Most interfaces have multiple modes in which input is mapped to different actions, and mode switching is simply the transition from one mode to another. In touch interfaces, the current mode can change how a single touch is interpreted: for example, it could draw a line, pan the canvas, select a shape, or enter a command. In Virtual Reality (VR), a hand-gesture-based 3D modelling application may have different modes for object creation, selection, and transformation; depending on the mode, the movement of the hand is interpreted differently. One of the crucial factors determining the effectiveness of an interface is user productivity, and the mode-switching time of different input techniques, whether in a touch interface or in a mid-air interface, affects user productivity. Moreover, when touch and mid-air interfaces such as VR are combined, making informed decisions about mode assignment becomes even more complicated. This thesis provides an empirical investigation that characterizes the mode-switching phenomenon in barehand touch-based and mid-air interfaces. It explores the potential of using these input spaces together for a productivity application in VR, and it concludes with a step towards defining and evaluating the multi-faceted mode concept, its characteristics, and its utility when designing user interfaces more generally.
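
    As a minimal sketch of the mode-to-action mapping described above, the Python below uses hypothetical names (Mode, Canvas, on_touch, switch_mode) rather than any system studied in the thesis: the same touch coordinates are routed to different actions depending on the active mode, and switch_mode marks the transition whose time cost the mode-switching studies measure.

        from enum import Enum, auto

        # Illustrative sketch only; names and modes are invented, not taken from the thesis.
        class Mode(Enum):
            DRAW = auto()
            PAN = auto()
            SELECT = auto()
            COMMAND = auto()

        class Canvas:
            def __init__(self):
                self.mode = Mode.DRAW

            def switch_mode(self, new_mode: Mode) -> None:
                # The transition itself is the "mode switch"; its time cost depends
                # on the input technique used to trigger it.
                self.mode = new_mode

            def on_touch(self, x: float, y: float) -> str:
                # The same input produces a different result in each mode.
                if self.mode is Mode.DRAW:
                    return f"draw a line point at ({x}, {y})"
                if self.mode is Mode.PAN:
                    return f"pan the canvas toward ({x}, {y})"
                if self.mode is Mode.SELECT:
                    return f"select the shape under ({x}, {y})"
                return f"enter a command at ({x}, {y})"

        canvas = Canvas()
        print(canvas.on_touch(3, 4))   # interpreted as drawing
        canvas.switch_mode(Mode.PAN)
        print(canvas.on_touch(3, 4))   # same touch, now interpreted as panning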

    Immersive Participation: Futuring, Training Simulation and Dance and Virtual Reality

    Dance knowledge can inform the development of scenario design in immersive digital simulation environments by strengthening a participant’s capacity to learn through the body. This study engages with processes of participatory practice that question how the transmission and transfer of dance knowledge/embodied knowledge in immersive digital environments is activated and applied in new contexts. These questions are relevant in both the arts and industry and have the potential to add value and knowledge through cross-disciplinary collaboration and exchange. This thesis consists of three different research projects, all focused on observation, participation, and interviews with experts on embodiment in digital simulation. The projects were chosen to provide a range of perspectives across dance, industry and futures studies. Theories of embodied cognition, in particular the notions of the extended body, distributed cognition, enactment and mindfulness, offer critical lenses through which to explore the relationship of embodied integration and participation within immersive digital environments. These areas of inquiry lead to the consideration of how language from the field of computer science can assist in describing somatic experience in digital worlds through a discussion of the emerging concepts of mindfulness, wayfinding, guided movement and digital kinship. These terms serve as an example of how the mutability of language became part of the process as terms applied in disparate disciplines were understood within varying contexts. The analytic tools focus on applying a posthuman view, speculation through a futures ethnography, and a cognitive ethnographical approach to my research project. These approaches allowed me to examine an ecology of practices in order to identify methods and processes that can facilitate the transmission and transfer of embodied knowledge within a community of practice. The ecological components include dance, healthcare, transport, education and human/computer interaction. These fields drove the data collection from a range of sources including academic papers, texts, specialists’ reports, scientific papers, interviews and conversations with experts and artists. The aim of my research is to contribute both a theoretical and a speculative understanding of processes, as well as tools applicable in the transmission of embodied knowledge in virtual dance and arts environments as well as digital simulation across industry. Processes were understood theoretically through established studies in embodied cognition applied to work-based training, reinterpreted through my own movement study. Futures methodologies paved the way for speculative processes and analysis. Tools to choreograph scenario design in immersive digital environments were identified through the recognition of cross-purpose language such as mindfulness, wayfinding, guided movement and digital kinship. Taken together, the major contribution of this research is a greater understanding of the value of dance knowledge applied to simulation, developed through theoretical and transformational processes and creative tools.

    Use of Landmarks to Improve Spatial Learning and Revisitation in Computer Interfaces

    Efficient spatial location learning and remembering are just as important for two-dimensional Graphical User Interfaces (GUIs) as they are for real environments where locations are revisited multiple times. Rapid spatial memory development in GUIs, however, can be difficult because these interfaces often lack the kinds of landmarks that people predominantly use to learn and recall real-life locations. In the absence of sufficient landmarks in GUIs, artificially created visual objects (i.e., artificial landmarks) could be used to support the learning of spatial locations. To understand how spatial memory development occurs in GUIs and to explore ways to assist users in efficiently learning and recalling locations, I carried out five studies exploring the use of landmarks in GUIs: one study that investigated the interfaces of four standard desktop applications (Microsoft Word, Facebook, Adobe Photoshop, and Adobe Reader), and four others that tested two prototype desktop GUIs, a command selection interface and a linear document viewer, augmented with artificial landmarks against non-landmarked versions; in addition, I tested the use of landmarks in variants of these interfaces that varied in command set size (small, medium, and large) and linear document type (textual and video). Results indicate that GUIs’ existing features and design elements can serve as reliable landmarks that provide spatial benefits similar to those of real environments. I also show that artificial landmarks can significantly improve spatial memory development in GUIs, supporting rapid spatial location learning and remembering. Overall, this dissertation reveals that landmarks can be a valuable addition to graphical systems to improve the memorability and usability of GUIs.
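
    As a rough, hypothetical illustration of the artificial-landmark idea rather than the dissertation's prototypes (Landmark, place_artificial_landmarks, and the glyph choices are invented for this sketch), distinctive markers can be attached to fixed positions along a linear document so that a remembered location can be associated with, and later revisited via, its nearest marker.

        from dataclasses import dataclass

        # Illustrative sketch only; not the command selection or document viewer
        # prototypes evaluated in the dissertation.
        @dataclass
        class Landmark:
            position: float   # normalised document position, 0.0 (top) to 1.0 (bottom)
            icon: str         # a visually distinctive glyph the reader can remember

        def place_artificial_landmarks(count: int, icons: list) -> list:
            # Spread distinctive markers evenly along the document so each region
            # of the scrollbar gains a memorable visual anchor.
            return [Landmark(position=(i + 0.5) / count, icon=icons[i % len(icons)])
                    for i in range(count)]

        def nearest_landmark(landmarks: list, scroll_pos: float) -> Landmark:
            # A remembered location can be revisited by scrolling back to the
            # landmark it was associated with.
            return min(landmarks, key=lambda lm: abs(lm.position - scroll_pos))

        marks = place_artificial_landmarks(4, ["▲", "●", "■", "◆"])
        print(nearest_landmark(marks, scroll_pos=0.7).icon)   # the marker nearest 70% depth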

    Using proprioception to support menu item selection in virtual reality

    Dissertation (MIS (Multimedia)), University of Pretoria, 2023.
    There is an abundance of literature that informs menu system design, specifically for the context of a two-dimensional flat monitor display. These guidelines were reconsidered to identify criteria that can inform the design of a menu system for a three-dimensional (3D) virtual environment that makes use of immersive virtual reality technology. Given the immersive nature of such technologies, it can be hypothesized that proprioception, the sense used to establish awareness of objects and space in a physical environment, can be transferred into the virtual environment to guide menu item selection. Various properties of menu system design were investigated to identify those that can be used together with proprioception to support menu item selection. Further investigation into the use of proprioception in a 3D virtual environment revealed that spatial awareness and memory need to be established first, so the criteria informing the design of proprioception-supported menu item selection had to take this into account as well. Consequently, a menu system was designed and developed based on the identified criteria to test their feasibility for informing the design of a menu system in a 3D virtual environment that enables users to rely on non-visual senses to guide their selections. The system was designed and developed using commercially available hardware and software to ensure that the findings of this study are accessible to the general public. The results of this study show that participants were able to establish spatial awareness and develop familiarity with the 3D virtual environment, enabling them to use proprioception, along with their visual senses and haptic feedback, to improve their ability to select menu items. The results also revealed that participants’ reliance on visual guidance for menu item selection varied, and that this variation was driven by personal preference.
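
    As a minimal sketch of how proprioception-guided selection could work, assuming a body-anchored layout (the item names, offsets, and proximity threshold below are invented for illustration and this is not the system built for the dissertation): menu items keep fixed positions relative to the user's torso, so with practice an item can be reached largely by body sense, and selection reduces to a hand-proximity test.

        import math
        from dataclasses import dataclass

        # Illustrative sketch only; item positions are fixed offsets from the torso
        # and body orientation is ignored for simplicity.
        @dataclass
        class Vec3:
            x: float
            y: float
            z: float

        def dist(a: Vec3, b: Vec3) -> float:
            return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)

        # Menu items anchored at fixed positions relative to the user's body, so
        # their locations stay consistent and can be learned proprioceptively.
        BODY_RELATIVE_ITEMS = {
            "undo":  Vec3(-0.30, 1.10, 0.35),
            "copy":  Vec3( 0.00, 1.10, 0.40),
            "paste": Vec3( 0.30, 1.10, 0.35),
        }

        def select_item(hand_world: Vec3, torso_world: Vec3, radius: float = 0.08):
            # Return the item whose body-relative anchor the hand is currently inside.
            hand_rel = Vec3(hand_world.x - torso_world.x,
                            hand_world.y - torso_world.y,
                            hand_world.z - torso_world.z)
            for name, anchor in BODY_RELATIVE_ITEMS.items():
                if dist(hand_rel, anchor) <= radius:
                    return name
            return None

        print(select_item(Vec3(0.32, 2.18, 0.36), Vec3(0.0, 1.1, 0.0)))  # "paste"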