36 research outputs found

    Press-n-Paste : Copy-and-Paste Operations with Pressure-sensitive Caret Navigation for Miniaturized Surface in Mobile Augmented Reality

    Copy-and-paste operations are among the most widely used features on computing devices such as desktop computers, smartphones, and tablets. However, copy-and-paste is not sufficiently supported on Augmented Reality (AR) smartglasses designed for real-time interaction with text in physical environments. This paper proposes two system solutions, Granularity Scrolling (GS) and Two Ends (TE), for copy-and-paste operations on AR smartglasses. By leveraging a thumb-size button on a touch- and pressure-sensitive surface, both multi-step solutions capture the target text through indirect manipulation and subsequently enable the copy-and-paste operations. Based on these solutions, we implemented an experimental prototype named Press-n-Paste (PnP). In an eight-session evaluation capturing 1,296 copy-and-paste operations, 18 participants using GS and TE achieved peak performance of 17,574 ms and 13,951 ms per copy-and-paste operation, with accuracy rates of 93.21% and 98.15% respectively, on par with commercial solutions that use direct manipulation on touchscreen devices. The user footprints also show that PnP has a distinctively miniaturized interaction area of 12.65 mm × 14.48 mm. PnP not only demonstrates the feasibility of copy-and-paste operations with flexible granularities on AR smartglasses, but also has significant implications for the design space of pressure widgets and for input design on smart wearables. Peer reviewed.
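
    As a rough illustration of the granularity mechanism the abstract describes, the following minimal Python sketch maps normalized button pressure to a text-selection granularity; the thresholds, names, and equal-band mapping are assumptions for illustration, not the authors' implementation.

        # Hypothetical sketch: map normalized pressure on a thumb-size button to a
        # selection granularity, in the spirit of Granularity Scrolling. The equal
        # pressure bands below are invented for illustration.
        GRANULARITIES = ["character", "word", "sentence", "paragraph"]

        def granularity_for_pressure(pressure: float) -> str:
            """Return a selection granularity for a pressure value in [0.0, 1.0]."""
            pressure = max(0.0, min(1.0, pressure))
            index = min(int(pressure * len(GRANULARITIES)), len(GRANULARITIES) - 1)
            return GRANULARITIES[index]

        for p in (0.1, 0.4, 0.7, 0.95):
            print(f"pressure {p:.2f} -> select by {granularity_for_pressure(p)}")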

    The Use of Multiple Slate Devices to Support Active Reading Activities

    Reading activities in the classroom and workplace occur predominantly on paper. Since existing electronic devices do not support these reading activities as well as paper, users have difficulty taking full advantage of the affordances of electronic documents. This dissertation makes three main contributions toward supporting active reading electronically. The first contribution is a comprehensive set of active reading requirements, drawn from three decades of research into reading processes. These requirements explain why existing devices are inadequate for supporting active reading activities. The second contribution is a multi-slate reading system that more completely supports the active reading requirements above. Researchers believe the suitability of paper for active reading is largely due to the fact that it distributes content across different sheets of paper, which are capable of displaying information as well as capturing input. The multi-slate approach draws inspiration from the independent reading and writing surfaces that paper provides, to blend the beneficial features of e-book readers, tablets, PCs, and tabletop computers. The development of the multi-slate system began with the Dual-Display E-book, which used two screens to provide richer navigation capabilities than a single-screen device. Following the success of the Dual-Display E-book, the United Slates, a general-purpose reading system consisting of an extensible number of slates, was created. The United Slates consisted of custom slate hardware, specialized interactions that enabled the slates to be used cooperatively, and a cloud-based infrastructure that robustly integrated the slates with users' existing computing devices and workflow. The third contribution is a series of evaluations that characterized reading with multiple slates. A laboratory study with 12 participants compared the relative merits of paper and electronic reading surfaces. Month-long in-situ deployments of the United Slates with graduate students in the humanities found the multi-slate configuration to be highly effective for reading. The United Slates delivered desirable paper-like qualities, including enhanced reading engagement, ease of navigation, and peace of mind, while also providing superior electronic functionality. The positive feedback suggests that the multi-slate configuration is a desirable method for supporting active reading activities.

    Personalized Interaction with High-Resolution Wall Displays

    Falling hardware prices and an increasing openness toward new interaction modalities have made wall-sized interactive displays feasible in recent years, and applications in settings such as visualization, education, and meeting support have been demonstrated successfully. Their size makes wall displays inherently suited to multi-user interaction. At the same time, we can assume that access to personal data and settings, and thus personalized interaction, will remain essential in most use cases. Current desktop and mobile user interfaces regulate access via an initial login and then personalize the complete interface to that user: access to personal data, configurations, and communications all assume a single user per screen. With multiple people at one large screen, this is not feasible and alternatives must be found. This thesis therefore addresses the research question: How can we provide personalized interfaces in the context of multi-user interaction with wall displays? The scope spans personalized interaction both close to the wall (using touch as the input modality) and further away (using mobile devices). Technical solutions that identify users at each interaction can replace logins and enable personalized interaction for multiple users at once. The thesis explores two alternative means of user identification: tracking users with RGB+depth cameras and locating the users' mobile devices via ultrasound. Building on this, interaction techniques that support personalized interaction using personal mobile devices are proposed. In the first contribution on interaction, HyDAP, we examine pointing from the perspective of moving users, and in the second, SleeD, we propose an arm-worn device that facilitates access to private data and personalized interface elements. The thesis also contributes insights into the practical implications of personalized interaction at wall displays: a qualitative study analyses user behaviour in the cooperative multi-user game Miners as an application case, finding awareness and occlusion issues. The final contribution is GIAnT, an analysis toolkit that visualizes users' movements, touch interactions, and gaze points when interacting with wall displays, allowing fine-grained investigation of the interactions.
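
    To make the user-identification step more concrete, here is a minimal Python sketch that associates a touch on the wall with the nearest tracked user; the coordinates, distance threshold, and dictionary-based registry are assumptions for illustration and abstract away whether positions come from depth cameras or ultrasound-located mobile devices.

        # Hypothetical sketch: assign a wall-display touch to the nearest tracked
        # user so the interaction can be personalized. Positions are in wall
        # coordinates (metres); the tracking source is abstracted away.
        from math import hypot

        def assign_touch_to_user(touch_xy, user_positions, max_distance=1.0):
            """Return the id of the closest tracked user, or None if all are too far away."""
            best_user, best_dist = None, max_distance
            for user_id, (x, y) in user_positions.items():
                dist = hypot(touch_xy[0] - x, touch_xy[1] - y)
                if dist < best_dist:
                    best_user, best_dist = user_id, dist
            return best_user

        users = {"alice": (1.2, 1.5), "bob": (3.8, 1.4)}
        print(assign_touch_to_user((1.3, 1.6), users))  # -> 'alice'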

    PAPIERCRAFT: A PAPER-BASED INTERFACE TO SUPPORT INTERACTION WITH DIGITAL DOCUMENTS

    Many researchers interact extensively with documents using both computers and paper printouts, which offer largely complementary strengths. Paper is comfortable to read from and write on, and can be flexibly arranged in space; computers provide an efficient way to archive, transfer, search, and edit information. However, because of the gap between the two media, it is difficult to integrate them seamlessly to optimize the user's experience of document interaction. Existing solutions either sacrifice paper's inherent flexibility or support very limited digital functionality on paper. In response, we have proposed PapierCraft, a novel paper-based interface that supports rich digital facilities on paper without sacrificing paper's flexibility. By employing emerging digital pen technology and multimodal pen-top feedback, PapierCraft allows people to use a digital pen to draw gesture marks on a printout, which are captured, interpreted, and applied to the corresponding digital copy. Conceptually, the pen and the paper form a paper-based computer, able to interact with other paper sheets and computing devices for operations like copy/paste, hyperlinking, and web searches. Furthermore, it retains the full range of paper's advantages through its lightweight, pen-and-paper-only interface. By combining the advantages of paper and digital media and by supporting a smooth transition between them, PapierCraft bridges the paper-computer gap. The contributions of this dissertation fall into four areas. First, to accommodate the static nature of paper, we proposed a pen-gesture command system that does not rely on screen-rendered feedback, but rather on the self-explanatory pen ink left on the paper. Second, for more interactive tasks, such as searching for keywords on paper, we explored pen-top multimodal (e.g., auditory, visual, and tactile) feedback that enhances the command system without sacrificing the inherent flexibility of paper. Third, we designed and implemented a multi-tier distributed infrastructure to map pen-paper interactions to digital operations and to unify document interaction on paper and on computers. Finally, we systematically evaluated PapierCraft through three lab experiments and two application deployments in the areas of field biology and e-learning. Our research has demonstrated the feasibility, usability, and potential applications of the paper-based interface, shedding light on the design of future interfaces for digital document interaction. More generally, our research also contributes to ubiquitous computing, mobile interfaces, and pen computing.
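
    The following minimal Python sketch illustrates the general idea of mapping a pen mark captured on a printout back to the corresponding digital copy and applying a copy or paste command; the page identifiers, region encoding, and in-memory document store are invented for illustration and are not PapierCraft's actual infrastructure.

        # Hypothetical sketch: interpret pen marks made on a printout as commands
        # on the corresponding digital page. Page ids, regions, and the document
        # store are invented for this example.
        from dataclasses import dataclass

        @dataclass
        class PenMark:
            page_id: str    # id of the printed page the stroke was made on
            region: tuple   # (start_offset, end_offset) into the page's text
            command: str    # "copy" or "paste"

        digital_pages = {"page-1": "Paper is comfortable to read from and write on."}
        clipboard = []

        def apply_mark(mark: PenMark) -> None:
            text = digital_pages[mark.page_id]
            if mark.command == "copy":
                clipboard.append(text[mark.region[0]:mark.region[1]])
            elif mark.command == "paste" and clipboard:
                start = mark.region[0]
                digital_pages[mark.page_id] = text[:start] + clipboard[-1] + text[start:]

        apply_mark(PenMark("page-1", (0, 5), "copy"))
        print(clipboard)  # ['Paper']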

    Personal clipboards for individual copy-and-paste on shared multi-user surfaces

    Clipboards are omnipresent on today's personal computing platforms. They provide copy-and-paste functionality that lets users easily reorganize information and quickly transfer data across applications. In this work, we introduce personal clipboards to multi-user surfaces. Personal clipboards enable individual and independent copy-and-paste operations when multiple users concurrently share the same direct-touch interface. As common surface computing platforms do not distinguish the touch input of different users, we have developed clipboards that leverage complementary personalization strategies. Specifically, we have built a context-menu clipboard based on implicit user identification of every touch, a clipboard based on personal subareas dynamically placed on the surface, and a handheld clipboard based on the integration of personal devices for surface interaction. In a user study, we demonstrate the effectiveness of personal clipboards for shared surfaces and show that all three personalization strategies enable personal clipboards, albeit with different impacts on interaction characteristics.
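
    A minimal Python sketch of the core data structure, assuming each touch event already carries the id of the user who produced it (however that identification is achieved), might look as follows; the class and method names are invented for illustration.

        # Hypothetical sketch: per-user clipboards on a shared surface, keyed by
        # the id attached to each identified touch.
        class PersonalClipboards:
            def __init__(self):
                self._clipboards = {}  # user_id -> last copied content

            def copy(self, user_id, content):
                self._clipboards[user_id] = content

            def paste(self, user_id):
                return self._clipboards.get(user_id)  # None if this user copied nothing

        boards = PersonalClipboards()
        boards.copy("user-a", "figure 3")
        boards.copy("user-b", "table 1")
        print(boards.paste("user-a"))  # 'figure 3', unaffected by user-b's copy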

    MIDAS: Multi-device Integrated Dynamic Activity Spaces

    Mobile phones, tablet computers, laptops, desktops, and large screen displays are increasingly available to individuals for information access, often simultaneously. Dominant content access protocols, such as HTTP/1.1, do not take advantage of this device multiplicity and support information access from single devices only. Changing devices means restarting an information session. Using devices in conjunction with each other poses several challenges, which include the presentation of content on devices with diverse form factors and the propagation of content changes across these devices. In this dissertation, I report on the design and implementation of MIDAS, an architecture and prototype system for multi-device presentations. I propose a framework, called 12C, for characterizing multi-device systems and evaluate MIDAS within this framework. MIDAS is designed as middleware that can work with multiple client-server architectures, such as the Web and context-aware Trellis, a non-Web hypertext system. It presents information content simultaneously on devices with diverse characteristics without requiring sensor-enhanced environments. The system adapts content elements for optimal presentation on the target device while also striving to retain fidelity with the original form from a human perceptual perspective. MIDAS reconfigures its presentation in response to user actions, availability of devices, and environmental context, such as a user's location or the time of day. I conducted a pilot study that explored human perception of similarity when image attributes such as size and color depth are modified in the process of presenting images on different devices. The results indicated that users tend to prefer scaling of images to color-depth reduction, but grayscaling is preferable to either modification. Not all images scale equally gracefully; those dominated by natural elements or man-made structures scale exceptionally well, while images that depict recognizable human faces or textual elements should be scaled only to the extent that these features retain their integrity. Attributes of the 12C framework describe aspects of multi-device systems that include infrastructure, presentation, interaction, interface, and security. Based on these criteria, MIDAS is a flexible infrastructure that lends itself to several content distribution and interaction strategies by separating client- and server-side configuration.
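
    As an illustration of the kind of content adaptation described, the sketch below scales an image to a target width and falls back to grayscale only when colour cannot be shown, reflecting the pilot study's preference ordering; it uses the Pillow library, and the device profile and function name are assumptions rather than MIDAS's actual adaptation pipeline.

        # Hypothetical sketch: adapt an image for a smaller target display,
        # preferring scaling over colour-depth reduction. Requires Pillow.
        from PIL import Image

        def adapt_image(img, max_width, allow_color=True):
            if img.width > max_width:
                # Preserve aspect ratio while fitting the target width.
                new_height = round(img.height * max_width / img.width)
                img = img.resize((max_width, new_height))
            if not allow_color:
                img = img.convert("L")  # grayscale only when the device requires it
            return img

        original = Image.new("RGB", (1600, 1200), "steelblue")
        print(adapt_image(original, max_width=320).size)  # (320, 240)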

    Understanding and Measuring Privacy and Security Assertions of Mobile and VR Applications

    The emergence of the COVID-19 pandemic has catalysed a profound transformation in the way consumers use and engage with mobile applications. There has been a noticeable surge in people relying on applications for purposes such as entertainment, remote work, and daily activities. These services collect large amounts of users' personal information and use it in many areas, such as medical and financial systems, but they also pose an unprecedented threat to users' privacy and security. Many international jurisdictions have enacted privacy laws and regulations to restrict the behaviour of apps and define the obligations of app developers. Although various privacy assertions are required in app stores, such as permission lists and privacy policies, it is usually difficult for regular users to understand the potential threats an app may pose, let alone identify undesired or malicious application behaviours. In this thesis, I have developed a comprehensive framework to assess the current privacy practices of mobile applications. The framework first establishes a knowledge base (including datasets) to model privacy and security assertions. It then builds a sound evaluation system to analyse the privacy practices of mobile applications. Large-scale privacy evaluations were conducted on different real-world datasets, including privacy policies, contact tracing apps, and children's apps, with the aim of revealing the risks associated with mobile application privacy. Lastly, a novel approach to applying differential privacy to streamed spatial data in VR applications is proposed. This thesis provides a comprehensive guide for the mobile software industry and legislators to build a stronger and safer privacy ecosystem. Thesis (Ph.D.) -- University of Adelaide, School of Computer and Mathematical Sciences, 202
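
    As a rough sketch of what applying differential privacy to streamed spatial data can look like, the Python below perturbs each point of a 2D trajectory with Laplace noise before release; the epsilon value, sensitivity, and sampling approach are assumptions for illustration, not the mechanism developed in the thesis.

        # Hypothetical sketch: release each streamed (x, y) point with independent
        # Laplace noise. Epsilon and sensitivity are invented for the example.
        import random

        def laplace_noise(scale):
            # A Laplace sample as the difference of two exponential samples.
            return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

        def privatize_stream(points, epsilon=1.0, sensitivity=1.0):
            scale = sensitivity / epsilon
            for x, y in points:
                yield x + laplace_noise(scale), y + laplace_noise(scale)

        trajectory = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
        for noisy_point in privatize_stream(trajectory, epsilon=0.5):
            print(noisy_point)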

    Augmented analyses: supporting the study of ubiquitous computing systems

    Ubiquitous computing is becoming an increasingly prevalent part of our everyday lives. Society's reliance on devices such as mobile phones, coupled with the increasing complexity of those devices, is an example of how our everyday human-human interaction is affected by this phenomenon. Social scientists studying human-human interaction must now take into account the effects of these technologies not just on the interaction itself, but also on the approach required to study it. User evaluation is a challenging topic in ubiquitous computing. It is generally considered to be difficult, certainly more so than in previous computational settings. Heterogeneity in design, distributed and mobile users, invisible sensing systems, and so on all add up to render traditional methods of observation and evaluation insufficient to construct a complete view of interactional activity. These challenges necessitate the development of new observational technologies. This thesis explores some of those challenges and demonstrates that system logs, with suitable methods of synchronising, filtering, and visualising them for use in conjunction with more traditional observational approaches such as video, can be used to overcome many of these issues. Through a review of both the literature of the field and the state of the art in computer-aided qualitative data analysis software (CAQDAS), a series of guidelines is constructed showing what would be required of a software toolkit to meet the challenges of studying ubiquitous computing systems. The thesis then outlines the design and implementation of two such software packages, Replayer and Digital Replay System, which approach the problem from different angles: the former focuses on visualising and exploring the data in system logs, while the latter focuses on supporting the methods used by social scientists to perform qualitative analyses. Case studies show how this technique can add significant value to the qualitative analysis of ubiquitous computing systems: how the coordination of system logs and other media can reveal information in the data that would otherwise be inaccessible; how it enables studies in locations and settings that would otherwise be impossible, or at least very difficult; and how accessible qualitative data analysis tools allow people to study settings or technologies they could not have studied before. This software aims to demonstrate the direction in which other CAQDAS packages may have to move in order to support the study of human-computer and human-human interaction in a world increasingly reliant upon ubiquitous computing technology.
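
    As a minimal illustration of the log-video synchronisation at the heart of this approach, the Python sketch below converts absolute log timestamps into offsets on a video timeline; the events, timestamps, and function name are invented, and Replayer and Digital Replay System of course handle synchronisation, filtering, and visualisation far more richly.

        # Hypothetical sketch: align timestamped system-log events with a video
        # timeline, given the video's start time on the same clock.
        from datetime import datetime

        log_events = [
            (datetime(2009, 5, 1, 14, 3, 12), "sensor: user entered zone A"),
            (datetime(2009, 5, 1, 14, 4, 47), "phone: map view opened"),
        ]
        video_start = datetime(2009, 5, 1, 14, 2, 58)

        def to_video_time(event_time, start):
            """Convert an absolute log timestamp into an offset on the video timeline."""
            return event_time - start

        for timestamp, message in log_events:
            print(f"{to_video_time(timestamp, video_start)} into the video: {message}")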