
    Investigating Mobile Device-Based Interaction Techniques for Collocated Merging

    In mixed-focus collaboration, group members create content both individually, as groundwork for discussion and further processing, and directly together in group work sessions. When content is created individually, the separate documents must be merged to obtain an overall solution. In our work, we focus on mixed-focus collaboration using mobile devices, especially smartphones, to create and merge content. Instead of sharing content within a group via email or messenger services, we describe three mobile device-based interaction techniques for merging that use built-in sensors to enable ad-hoc collaboration and that are easy and eyes-free to perform. We conducted a user study to investigate these merging interactions. Overall, 21 participants tested the interactions and evaluated the task load and user experience (UX) of the proposed device-based interactions. Furthermore, they compared the interactions with a common way of sharing content, namely sending it as an email attachment. Participants gave valuable feedback and stated that our merging interaction techniques were much easier to perform. We also found that they were much faster, less demanding, and provided a better UX than email.
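    The abstract does not detail an implementation, but as a rough, hypothetical sketch of the kind of sensor-based trigger such merging interactions might rely on, the Python snippet below flags an acceleration spike (e.g., two phones being knocked together) and then merges content. The threshold, sample format, and the trivial merge are illustrative assumptions, not the authors' actual techniques.

        import math

        def magnitude(sample):
            # Euclidean norm of a 3-axis accelerometer reading (m/s^2).
            x, y, z = sample
            return math.sqrt(x * x + y * y + z * z)

        def detect_bump(samples, threshold=25.0):
            # A reading well above 1 g (~9.8 m/s^2) is a crude proxy
            # for two phones being knocked together.
            return any(magnitude(s) > threshold for s in samples)

        def merge_documents(parts):
            # Placeholder merge: concatenate individual contributions.
            return "\n".join(parts)

        # Example: the spike in the third sample triggers the merge.
        readings = [(0.1, 0.2, 9.8), (0.3, 0.1, 9.7), (12.0, 18.0, 22.0)]
        if detect_bump(readings):
            print(merge_documents(["Alice's notes", "Bob's notes"]))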

    A Design Kit for Mobile Device-Based Interaction Techniques

    Besides the design of a mobile application's graphical interface, mobile phones and their built-in sensors enable various ways to engage with digital content in a physical, device-based manner that moves beyond the screen. Such mobile device-based interactions are characterized by device movements and positions as well as user actions in real space. So far, there is little guidance available for novice designers and developers to ideate and design new solutions for specific individual or collaborative use cases. Hence, the potential of mobile device-based interactions is seldom fully exploited. To address this issue, we propose a design kit for mobile device-based interaction techniques that follows a morphological approach. The kit comprises seven dimensions, each with several elements, which can easily be combined to form an interaction technique by selecting at least one entry from each dimension. The design kit can support designers in exploring novel mobile interaction techniques for specific interaction problems in the ideation phase of the design process, but also in the analysis of existing device-based interaction solutions.
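    A minimal sketch of the morphological idea described above, in Python: dimensions map to sets of elements, and candidate techniques arise from picking one element per dimension (the kit also allows multiple picks). The dimension and element names here are invented placeholders; the actual kit defines seven dimensions not reproduced here.

        from itertools import product

        # Hypothetical subset of dimensions and elements.
        dimensions = {
            "movement": ["tilt", "shake", "bump"],
            "feedback": ["vibration", "sound"],
            "context": ["individual", "collaborative"],
        }

        # Each single-element pick across all dimensions is one
        # candidate interaction technique.
        combos = list(product(*dimensions.values()))
        for combo in combos:
            print(dict(zip(dimensions.keys(), combo)))
        print(len(combos), "single-pick combinations")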

    Designing for Cross-Device Interactions

    Driven by technological advancements, we now own and operate an ever-growing number of digital devices, leading to an increased amount of digital data we produce, use, and maintain. However, while there is a substantial increase in computing power and in the availability of devices and data, many tasks we conduct with our devices are not well connected across multiple devices. We conduct our tasks sequentially instead of in parallel, while collaborative work across multiple devices is cumbersome to set up or simply not possible. To address these limitations, this thesis is concerned with cross-device computing. In particular, it aims to conceptualise, prototype, and study interactions in cross-device computing. This thesis contributes to the field of Human-Computer Interaction (HCI), and more specifically to the area of cross-device computing, in three ways: first, this work conceptualises previous work through a taxonomy of cross-device computing, resulting in an in-depth understanding of the field that identifies underexplored research areas and enables the transfer of key insights into the design of interaction techniques. Second, three case studies were conducted that show how cross-device interactions can support curation work as well as augment users' existing devices for individual and collaborative work. These case studies incorporate novel interaction techniques for supporting cross-device work. Third, through studying cross-device interactions and group collaboration, this thesis provides insights into how researchers can understand and evaluate multi- and cross-device interactions for individual and collaborative work. We provide a visualization and querying tool for interaction analysis of spatial measures and video recordings, which facilitates such evaluations of cross-device work. Overall, the work in this thesis advances the field of cross-device computing with its taxonomy guiding research directions, novel interaction techniques and case studies demonstrating cross-device interactions for curation, and insights into and tools for effective evaluation of cross-device systems.
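    As an illustration of the kind of spatial-measure query such an analysis tool might support, the hypothetical Python sketch below scans a positional log for moments when two tracked devices come within a given distance. The log format, field names, and threshold are our assumptions, not the tool's actual API.

        import math

        # Hypothetical log of tracked device positions:
        # (timestamp_s, device_id, x_m, y_m)
        log = [
            (0.0, "tablet", 0.0, 0.0), (0.0, "phone", 2.0, 0.0),
            (1.0, "tablet", 0.0, 0.0), (1.0, "phone", 0.4, 0.1),
        ]

        def proximity_events(log, max_dist=0.5):
            # Yield timestamps at which two devices are within
            # max_dist metres of each other.
            by_time = {}
            for t, dev, x, y in log:
                by_time.setdefault(t, []).append((dev, x, y))
            for t, devices in sorted(by_time.items()):
                for i in range(len(devices)):
                    for j in range(i + 1, len(devices)):
                        (d1, x1, y1), (d2, x2, y2) = devices[i], devices[j]
                        if math.hypot(x1 - x2, y1 - y2) <= max_dist:
                            yield t, d1, d2

        for event in proximity_events(log):
            print(event)   # (1.0, 'tablet', 'phone')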

    Expanding social mobile games beyond the device screen

    Emerging pervasive games use sensors, graphics, and networking technologies to provide immersive game experiences integrated with the real world. Existing pervasive games commonly rely on a device screen for game-related information, overlooking opportunities to use new types of contextual interactions, such as jumping, a punching gesture, or voice, as game inputs. We present the design of Spellbound, an outdoor pervasive team-based physical game, to contribute to our understanding of how to design pervasive games that nurture a spirit of togetherness. In the user evaluation section, we also briefly touch upon how togetherness and playfulness can transform physical movement into a desirable activity. Spellbound takes advantage of the above-mentioned opportunities and integrates real-world actions like jumping and spinning with a virtual world. It replaces touch-based input with voice interaction and provides glanceable and haptic feedback using custom hardware, in the true spirit of social play characteristic of traditional children's games. We believe Spellbound is a form of digital outdoor gaming that anchors enjoyment in physical action, social interaction, and tangible feedback. Spellbound was well received in user evaluation playtests, which confirmed that the main design objective of enhancing a sense of togetherness was largely met.
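    As a hedged illustration of how a real-world action like jumping might be recognized as game input, this Python sketch flags a sustained near-free-fall phase in accelerometer magnitudes. The thresholds and sampling assumptions are ours, not Spellbound's actual detector.

        def detect_jump(accel_magnitudes, g=9.8, low=0.3, samples_airborne=3):
            # A run of readings well below 1 g suggests the player
            # briefly left the ground (free fall). Values in m/s^2.
            run = 0
            for a in accel_magnitudes:
                if abs(a) < low * g:
                    run += 1
                    if run >= samples_airborne:
                        return True
                else:
                    run = 0
            return False

        # A dip toward 0 g across several samples registers as a jump,
        # followed by a landing spike.
        trace = [9.8, 9.9, 2.1, 1.5, 1.2, 14.0, 9.8]
        print("jump!" if detect_jump(trace) else "grounded")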

    MarathOn Multiscreen: group television watching and interaction in a viewing ecology

    This paper reports and discusses the findings of an exploratory study into collaborative user practice with a multiscreen television application. MarathOn Multiscreen allows users to view, share, and curate amateur and professional video footage of a community marathon event. Our investigations focused on collaborative sharing practices across different viewing activities and devices, the roles taken by different devices in a viewing ecology, and how users consume professional and amateur content. Our work uncovers significant differences in user behaviour and collaboration during more participatory viewing activities, such as sorting and ranking footage, which has implications for awareness of other users' interactions while viewing together and alone. In addition, users' appreciation and use of amateur video content depends not only on its quality and the activity at hand but also on their personal involvement in the content.

    Aeronautical engineering: A special bibliography with indexes, supplement 82, April 1977

    This bibliography lists 311 reports, articles, and other documents introduced into the NASA scientific and technical information system in March 1977.

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing visual representations of locations for use in VEs is usually a tedious process that requires either manual modelling of environments or the use of specific hardware. Capturing environment dynamics is not straightforward either, and is usually performed with dedicated tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render, and stream data from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can convey the spatial and temporal information of a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether display type affects reasoning about events within videos in panoramic context. These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first experiment compared our videos-in-context interface with fully panoramic video and conventional webcam video conferencing in an object-placement telecommunication scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks; to support it, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution to spatio-temporal exploration of remote locations. Our approach presents a richer visual representation of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more complex, and often expensive, solutions. These findings benefit many applications, including teleconferencing, virtual tourism, and remote assistance.
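    A small worked example of the geometry underlying videos embedded in panoramic imagery: mapping a viewing direction onto an equirectangular panorama. The projection below is the standard one for equirectangular images; the function and parameter names are ours, not the thesis systems' code.

        import math

        def direction_to_equirect(yaw, pitch, width, height):
            # Map a viewing direction (yaw in [-pi, pi], pitch in
            # [-pi/2, pi/2], radians) to pixel coordinates in an
            # equirectangular panorama of size width x height.
            u = (yaw / (2 * math.pi) + 0.5) * width
            v = (0.5 - pitch / math.pi) * height
            return u, v

        # A camera looking 90 degrees to the right, level with the
        # horizon, lands three quarters across a 4096x2048 panorama.
        print(direction_to_equirect(math.pi / 2, 0.0, 4096, 2048))
        # -> (3072.0, 1024.0)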

    Development and evaluation of mixed reality-enhanced robotic systems for intuitive tele-manipulation and telemanufacturing tasks in hazardous conditions

    In recent years, with the rapid development of space exploration, deep-sea discovery, nuclear rehabilitation and management, and robot-assisted medical devices, there is an urgent need for humans to interactively control robotic systems to perform increasingly precise remote operations. The value of medical telerobotic applications during the recent coronavirus pandemic has also been demonstrated and will grow in the future. This thesis investigates novel approaches to the development and evaluation of a mixed reality-enhanced telerobotic platform for intuitive remote teleoperation in dangerous and difficult working conditions, such as contaminated sites and undersea or extreme welding scenarios. This research aims to remove human workers from harmful working environments by equipping complex robotic systems with human intelligence and command/control via intuitive and natural human-robot interaction, including the implementation of MR techniques to improve the user's situational awareness, depth perception, and spatial cognition, which are fundamental to effective and efficient teleoperation. The proposed robotic mobile manipulation platform consists of a UR5 industrial manipulator, a 3D-printed parallel gripper, and a customized mobile base, and is envisaged to be controlled by non-skilled operators who are physically separated from the robot workspace through an MR-based vision/motion mapping approach. The platform development process involved CAD/CAE/CAM and rapid prototyping techniques, such as 3D printing and laser cutting. The Robot Operating System (ROS) and Unity 3D are employed in the development process to enable intuitive control of the robotic system and to ensure immersive and natural human-robot interactive teleoperation. This research presents an integrated motion/vision retargeting scheme based on a mixed reality subspace approach for intuitive and immersive telemanipulation. An imitation-based, velocity-centric motion mapping is implemented via the MR subspace to accurately track operator hand movements for robot motion control, and enables spatial velocity-based control of the robot tool center point (TCP). The proposed system allows precise manipulation of end-effector position and orientation while letting the operator readily adjust the maneuvering velocity. A mixed reality-based multi-view merging framework for immersive and intuitive telemanipulation of a complex mobile manipulator with integrated 3D/2D vision is presented. The proposed 3D immersive telerobotic schemes provide users with depth perception through the merging of multiple 3D/2D views of the remote environment via the MR subspace. The mobile manipulator platform can be effectively controlled by non-skilled operators who are physically separated from the robot workspace through a velocity-based imitative motion mapping approach. Finally, this thesis presents an integrated mixed reality and haptic feedback scheme for intuitive and immersive teleoperation of robotic welding systems. By incorporating MR technology, the user is fully immersed in a virtual operating space augmented by real-time visual feedback from the robot workspace. The proposed mixed reality virtual fixture integration approach implements hybrid haptic constraints that guide the operator's hand movements along a conical guidance path, effectively aligning the welding torch for welding and constraining the welding operation to a collision-free area.
    Overall, this thesis presents a complete telerobotic application space built on mixed reality and immersive elements to effectively translate the operator into the robot's space in an intuitive and natural manner. The results are thus a step forward in cost-effective and computationally efficient human-robot interaction research and technologies. The system presented is readily extensible to a range of potential applications beyond the robotic tele-welding and tele-manipulation tasks used to demonstrate, optimise, and prove the concepts.
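    As a rough sketch of the velocity-centric motion mapping idea described above, the following Python function turns the operator's hand displacement from a clutch anchor into a clamped TCP velocity command. The gain, limits, and interfaces are illustrative assumptions rather than the thesis implementation.

        def tcp_velocity(hand_pos, anchor, gain=1.5, v_max=0.25):
            # Map hand displacement from an anchor point (set when the
            # clutch engages) to a robot TCP velocity command, clamped
            # for safety. Positions in metres, velocity in m/s.
            v = [gain * (h - a) for h, a in zip(hand_pos, anchor)]
            norm = sum(c * c for c in v) ** 0.5
            if norm > v_max:
                v = [c * v_max / norm for c in v]
            return v

        # Moving the hand 10 cm forward commands 0.15 m/s, while a
        # large excursion saturates at v_max.
        print(tcp_velocity([0.10, 0.0, 0.0], [0.0, 0.0, 0.0]))
        print(tcp_velocity([0.50, 0.50, 0.0], [0.0, 0.0, 0.0]))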

    Merging the Real and the Virtual: An Exploration of Interaction Methods to Blend Realities

    We investigate, build, and design interaction methods to merge the real with the virtual. An initial investigation looks at spatial augmented reality (SAR) and its effects on pointing with a real mobile phone. A study reveals a set of trade-offs between the raycast, viewport, and direct pointing techniques. To further investigate the manipulation of virtual content within a SAR environment, we design an interaction technique that utilizes the distance at which a user holds a mobile phone away from their body. Our technique enables pushing virtual content from a mobile phone to an external SAR environment, interacting with that content, rotating, scaling, and translating it, and pulling the content back into the mobile phone, all in a way that ensures seamless transitions between the real environment of the mobile phone and the virtual SAR environment. To investigate the issues that arise when the physical environment is hidden by a fully immersive virtual reality (VR) HMD, we design and investigate a system that merges a real-time 3D reconstruction of the real world with a virtual environment. This allows users to freely move, manipulate, observe, and communicate with people and objects situated in their physical reality without losing their sense of immersion or presence inside the virtual world. A study with VR users demonstrates the affordances provided by the system and how it can be used to enhance current VR experiences. We then move to AR, to investigate the limitations of optical see-through HMDs and the problem of communicating the internal state of the virtual world to unaugmented users. To address these issues and enable new ways to visualize, manipulate, and share virtual content, we propose a system that combines an optical see-through HMD with a wearable SAR projector. Demonstrations showcase ways to utilize the projected and head-mounted displays together, such as expanding the field of view, distributing content across depth surfaces, and enabling bystander collaboration. We then turn to video games to investigate how spectatorship of these virtual environments can be enhanced through expanded video rendering techniques. We extract and combine additional data to form a cumulative 3D representation of the live game environment for spectators, which enables each spectator to individually control a personal view into the stream while in VR. A study shows that users prefer spectating in VR over a comparable desktop rendering.
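    As a hedged sketch of the core geometry behind raycast pointing, the Python snippet below intersects a pointing ray defined by the phone's pose with a flat projection wall. The coordinate conventions and the planar-wall assumption are ours, not the system's actual implementation.

        def raycast_to_wall(origin, direction, wall_z):
            # Intersect a pointing ray from the phone's pose with a
            # vertical wall plane at z = wall_z; returns the (x, y)
            # hit point, or None if the ray points away from the wall.
            ox, oy, oz = origin
            dx, dy, dz = direction
            if abs(dz) < 1e-9:
                return None          # ray parallel to the wall
            t = (wall_z - oz) / dz
            if t <= 0:
                return None          # the wall is behind the phone
            return ox + t * dx, oy + t * dy

        # Phone held 2 m from the wall, angled slightly up and right.
        print(raycast_to_wall((0.0, 1.2, 0.0), (0.1, 0.05, 1.0), 2.0))
        # -> (0.2, 1.3)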