
    Investigating the characteristics of unistroke gestures using a mobile game

    Touch gestures are today's main input method for interaction with smartphones. In particular, unistroke gestures, which are gestures consisting of one articulated line, are commonly used for text input (touch keyboards) and as shortcuts for functions on mobile devices. This work investigates the user's accuracy in articulating unistroke touch gestures on mobile devices. To this end, two studies were conducted that focused on the user's ability to perceive different variations of unistroke gestures and reproduce them accurately. First, a control study analyzed the user's touch accuracy during the articulation of single-line and composed-line gestures by varying gesture properties such as the rotation or the angles within the gestures. To analyze the influence of distraction in the user's natural environment, a large-scale study was conducted that used a mobile game as its apparatus. The game was designed so that the user was motivated to articulate the given gestures precisely; its principle was based on the control study's procedure of perceiving and reproducing gestures. To gather a large amount of touch samples, the game was published on a mobile app store and the samples were collected through the internet. The analysis of both studies showed that the gestures' orientation and the angles within the gestures affected articulation accuracy, measured as the deviations made by bending, rotating, and varying the shape of the gestures. Furthermore, the gestures articulated in the game study tended to be more error-prone than those articulated in the control study.
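
    To make the deviation measures concrete, here is a minimal Python sketch, assuming 2D touch samples as (x, y) tuples, of two of the quantities the abstract refers to for single-line gestures: rotation error against an intended stroke angle, and bending relative to the straight chord between the stroke's endpoints. The function names and the example stroke are hypothetical; this is not the study's analysis code.

    ```python
    import math

    def rotation_error_deg(points, intended_angle_deg):
        """Signed difference between the articulated stroke direction
        (endpoint to endpoint) and the intended angle, in degrees."""
        (x0, y0), (x1, y1) = points[0], points[-1]
        articulated = math.degrees(math.atan2(y1 - y0, x1 - x0))
        # wrap the difference into (-180, 180]
        return (articulated - intended_angle_deg + 180) % 360 - 180

    def bending(points):
        """Mean perpendicular distance of the samples from the
        straight line between the stroke's endpoints."""
        (x0, y0), (x1, y1) = points[0], points[-1]
        length = math.hypot(x1 - x0, y1 - y0) or 1.0  # guard degenerate strokes
        return sum(abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
                   for x, y in points) / len(points)

    stroke = [(0, 0), (10, 2), (20, 3), (30, 2), (40, 0)]  # hypothetical samples
    print(rotation_error_deg(stroke, 10), bending(stroke))  # -10.0 and 1.4
    ```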

    StableHand VR: a virtual reality serious game for hand rehabilitation

    Integrated master's dissertation in Biomedical Engineering (Medical Informatics). A third of all injuries at work are sustained to the hand, and hand and wrist injuries are estimated to account for between 10% and 30% of all Emergency Department (ED) attendances. In 2017, there were approximately 18 million hand and wrist fractures, 2 million thumb amputations and 4 million non-thumb digit amputations worldwide. Several injuries, disabilities and diseases can affect manual motor control. Hand physiotherapy is indispensable to restore hand functionality; however, this process is often a strenuous and cognitively demanding experience. This work proposes a Virtual Reality (VR) serious game to improve conventional physiotherapy in hand rehabilitation. It focuses on resolving recurring limitations reported in most technological solutions to the problem, namely the limited diversity of supported movements and exercises, complicated calibrations, and the exclusion of patients with open wounds or other disfigurements of the hand. Concepts such as mixed reality, serious games for health, and hand rehabilitation are addressed in this dissertation to provide the reader with a background for the project. The latest developments in digital games and technologies in the hand rehabilitation field, along with specifications, requirements, general game characteristics and the most relevant details of the game implementation process, are also presented. The system was assessed in two mid-term validations to test its viability and adjust the development. The first validation was performed with eight able-bodied participants and the second with four health professionals working in the rehabilitation field. Each validation followed ten minutes of guided functional task practice, followed by a semi-structured interview for the first validation and an online questionnaire for the second. The questions asked in the interview and the online questionnaire focused on the participants' familiarity with video games, their opinion of the Oculus Quest and its hand-tracking system, and the StableHand VR game. The System Usability Scale (SUS) scores obtained and the participants' positive feedback showed the potential of both the conceptual and technological approaches adopted for this game as a viable complement to conventional hand rehabilitation. The project's main objectives were achieved, and several relevant topics for further research were identified.
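
    Since the validation relies on System Usability Scale scores, a short sketch of the standard SUS scoring formula may help. The formula itself is the published one; the function name and the example responses below are hypothetical and are not data from the study.

    ```python
    def sus_score(responses):
        """Compute the SUS score from ten 1-5 Likert responses.

        Standard scoring: odd-numbered items contribute (response - 1),
        even-numbered items contribute (5 - response); the sum is
        scaled by 2.5 to give a 0-100 score.
        """
        if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
            raise ValueError("SUS needs ten responses on a 1-5 scale")
        total = sum(r - 1 if i % 2 == 0 else 5 - r  # i=0 is item 1 (odd item)
                    for i, r in enumerate(responses))
        return total * 2.5

    # Example with made-up responses:
    print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 2]))  # 82.5
    ```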

    Understanding user interactions in stereoscopic head-mounted displays

    2022 Spring. Includes bibliographical references. Interacting in stereoscopic head-mounted displays can be difficult. There are not yet clear standards for how interactions in these environments should be performed. In virtual reality there are a number of well-designed interaction techniques; however, augmented reality interaction techniques still need to be improved before they can be easily used. This dissertation covers work done towards understanding how users navigate and interact with virtual environments that are displayed in stereoscopic head-mounted displays. With this understanding, existing techniques from virtual reality devices can be transferred to augmented reality where appropriate, and where that is not the case, new interaction techniques can be developed. This work begins by observing how participants interact with virtual content using gesture alone, speech alone, and the combination of gesture+speech during a basic object-manipulation task in augmented reality. Later, a complex three-dimensional data-exploration environment is developed and refined. That environment can be used in both augmented reality (AR) and virtual reality (VR), either asynchronously or simultaneously. The process of iteratively designing that system and the design choices made during its implementation are provided for future researchers working on complex systems. This dissertation concludes with a comparison of user interactions and navigation in that complex environment when using either an augmented or virtual reality display. That comparison contributes new knowledge on how people perform object manipulations between the two devices. When viewing 3D visualizations, users need to feel able to navigate the environment. Without careful attention to proper interaction technique design, people may struggle to use the developed system. These struggles may range from a system that is uncomfortable and unfit for long-term use to new users being unable to interact in these environments at all. Getting the interactions right for AR and VR environments is a step towards facilitating their widespread acceptance. This dissertation provides the groundwork needed to start designing interaction techniques around how people utilize their personal space, virtual space, body, tools, and feedback systems.

    Creating mobile gesture-based interaction design patterns for older adults : a study of tap and swipe gestures with Portuguese seniors

    Master's thesis. Multimédia. Faculdade de Engenharia. Universidade do Porto. 201

    A computational approach to gestural interactions of the upper limb on planar surfaces

    There are many compelling reasons for proposing new gestural interactions: one might want to use a novel sensor that affords access to data that could not previously be captured, or transpose a well-known task into a different, unexplored scenario. After an initial design phase, the creation, optimisation or understanding of new interactions remains, however, a challenge. Models have been used to foresee interaction properties: Fitts' law, for example, accurately predicts movement time in pointing and steering tasks. But what happens when no existing models apply? The core assertion of this work is that a computational approach provides frameworks and associated tools that are needed to model such interactions. This is supported through three research projects, in which discriminative models are used to enable interactions, optimisation is included as an integral part of their design, and reinforcement learning is used to explore the motions users produce in such interactions.
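
    As an illustration of the kind of model the abstract mentions, the Shannon formulation of Fitts' law predicts movement time from target distance and width. The sketch below uses placeholder regression constants a and b, which in practice are fitted per device and task; the values shown are illustrative only.

    ```python
    import math

    def fitts_movement_time(distance: float, width: float,
                            a: float = 0.1, b: float = 0.15) -> float:
        """Predict pointing movement time in seconds.

        MT = a + b * log2(D / W + 1), where D is the distance to the
        target, W is the target width, and a, b are empirically
        fitted constants (placeholders here).
        """
        index_of_difficulty = math.log2(distance / width + 1)  # in bits
        return a + b * index_of_difficulty

    # Example: a 300 px reach to a 40 px target
    print(fitts_movement_time(300, 40))  # ~0.56 s with the placeholder constants
    ```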

    Exploring the Multi-touch Interaction Design Space for 3D Virtual Objects to Support Procedural Training Tasks

    Multi-touch interaction has the potential to be an important input method for realistic training in 3D environments. However, multi-touch interaction has not been explored much for 3D tasks, especially when trying to leverage realistic, real-world interaction paradigms. A systematic inquiry into what realistic gestures look like for 3D environments is required to understand how users translate real-world motions to multi-touch motions. Once those gestures are defined, it is important to see how we can leverage them to enhance training tasks. To explore the interaction design space for 3D virtual objects, we began by conducting our first study, which explored user-defined gestures. From this work we identified a taxonomy and design guidelines for 3D multi-touch gestures, and how perspective view plays a role in the chosen gesture. We also identified a desire to use pressure on capacitive touch screens. Since the best way to implement pressure still required some investigation, our second study evaluated two different pressure estimation techniques in two different scenarios. Once we had a taxonomy of gestures, we wanted to examine whether implementing these realistic multi-touch interactions in a training environment provided training benefits. Our third study compared multi-touch interaction to standard 2D mouse interaction and to actual physical training, and found that multi-touch interaction performed better than the 2D mouse and as well as physical training. This study showed us that multi-touch training using a realistic gesture set can perform as well as training on the actual apparatus. One limitation of the first training study was that the user's perspective was constrained to allow us to focus on isolating the gestures. Since users can change their perspective in a real-life training scenario and thereby gain spatial knowledge of components, we wanted to see whether allowing users to alter their perspective helped or hindered training. Our final study compared training with Unconstrained multi-touch interaction, Constrained multi-touch interaction, or training on the actual physical apparatus. Results show that the Unconstrained multi-touch interaction and Physical groups had significantly better performance scores than the Constrained multi-touch interaction group, with no significant difference between the Unconstrained multi-touch and Physical groups. Our results demonstrate that allowing users more freedom to manipulate objects as they would in the real world benefits training. In addition to the research already performed, we propose several avenues for future research into the interaction design space for 3D virtual objects that we believe will be of value to researchers and designers of 3D multi-touch training environments.
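
    The abstract does not name the two pressure estimation techniques that were compared, so the following sketch shows just one commonly used proxy on capacitive screens: mapping the reported contact size to a pseudo-pressure, since a harder press flattens the fingertip and enlarges the contact patch. The function name and the calibration bounds are hypothetical and are not taken from the study.

    ```python
    def pressure_from_contact_area(touch_radius_px: float,
                                   light_touch_px: float = 8.0,
                                   hard_press_px: float = 24.0) -> float:
        """Map a touch's reported contact radius to a 0-1 pseudo-pressure.

        The bounds are per-user/per-device calibration values
        (placeholders here), e.g. measured from a light tap and a
        deliberate hard press during a calibration step.
        """
        span = hard_press_px - light_touch_px
        p = (touch_radius_px - light_touch_px) / span
        return min(1.0, max(0.0, p))  # clamp to the calibrated range

    print(pressure_from_contact_area(16.0))  # 0.5 with the defaults above
    ```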

    Understanding Mode and Modality Transfer in Unistroke Gesture Input

    Unistroke gestures are an attractive input method with an extensive research history, but one challenge with their usage is that the gestures are not always self-revealing. To help users obtain expertise with these gestures, interaction designers often deploy a guided novice mode, where users can rely on recognizing visual UI elements to perform a gestural command. Once a user knows the gesture and its associated command, they can perform it without guidance, relying on recall. The primary aim of my thesis is to obtain a comprehensive understanding of why, when, and how users transfer from guided modes or modalities to potentially more efficient, or novel, methods of interaction through symbolic-abstract unistroke gestures. The goal of my work is not only to study user behaviour on the path from novice to more efficient interaction mechanisms, but also to extend the concept of intermodal transfer to different contexts. We garner this understanding by empirically evaluating three different use cases of mode and/or modality transitions. Leveraging marking menus, the first piece investigates whether or not designers should force expertise transfer by penalizing use of the guided mode, in an effort to encourage use of the recall mode. Second, we investigate how well users can transfer skills between modalities, particularly when it is impractical to present guidance in the target, or recall, modality. Lastly, we assess how well users' pre-existing spatial knowledge of an input method (the QWERTY keyboard layout) transfers to performance in a new modality. Applying lessons from these three assessments, we segment intermodal transfer into three possible characterizations, beyond the traditional novice-to-expert contextualization. This is followed by a series of implications and potential areas of future exploration spawning from our work.

    An analysis of interaction in the context of wearable computers

    The focus of this thesis is the evaluation of input modalities for generic input tasks, such as inputting text and pointer-based interaction. In particular, input systems that can be used within a wearable computing system are examined in terms of human-wearable computer interaction. A review of the literature identified a lack of empirical research into the use of input devices for text input and pointing when used as part of a wearable computing system. The research carried out within this thesis took an approach that acknowledged the movement condition of the user of a wearable system, and evaluated the wearable input devices while the participants were both mobile and stationary. Each experiment was based on the user's time on task, their accuracy, and a NASA TLX assessment, which provided the participant's subjective workload. The input devices assessed were 'off the shelf' systems. These were chosen as they are readily available to a wider range of users than bespoke input systems. Text-based input was examined first. The text input systems evaluated were: a keyboard, an on-screen keyboard, a handwriting recognition system, a voice recognition system and a wrist-keyboard (sometimes known as a wrist-worn keyboard). It was found that the most appropriate text input system to use overall was the handwriting recognition system. (This is further explored in the discussion of Chapters three and seven.) The text input evaluations were followed by a series of four experiments that examined pointing devices and assessed their appropriateness as part of a wearable computing system. The devices were: an off-table mouse, a speech recognition system, a stylus and a track-pad. These were assessed in relation to the following generic pointing tasks: target acquisition, dragging and dropping, and trajectory-based interaction. Overall, the stylus was found to be the most appropriate input device for use with a wearable system when used as a pointing device. (This is further covered in Chapters four to six.) By completing this series of experiments, evidence has been scientifically established that can support both a wearable computer designer's and a wearable user's choice of input device. These choices can be made in regard to generic interface task activities such as inputting text, target acquisition, dragging and dropping, and trajectory-based interaction.
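
    Since each experiment used a NASA TLX assessment, a brief sketch of the standard weighted TLX scoring may be useful. The scoring scheme is the published one (six subscales rated 0-100, weights derived from 15 pairwise comparisons); the function name and the example ratings and weights below are hypothetical and not data from the thesis.

    ```python
    SUBSCALES = ("mental", "physical", "temporal",
                 "performance", "effort", "frustration")

    def tlx_workload(ratings: dict, weights: dict) -> float:
        """Overall weighted NASA TLX workload score.

        Each subscale's weight is the number of pairwise comparisons
        it won (out of 15 total); the overall score is the
        weight-adjusted mean of the 0-100 subscale ratings.
        """
        if sum(weights.values()) != 15:
            raise ValueError("weights from 15 pairwise comparisons must sum to 15")
        return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15

    ratings = {"mental": 70, "physical": 30, "temporal": 55,
               "performance": 40, "effort": 60, "frustration": 35}  # hypothetical
    weights = {"mental": 4, "physical": 1, "temporal": 3,
               "performance": 2, "effort": 4, "frustration": 1}
    print(tlx_workload(ratings, weights))  # ~55.3
    ```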

    Tools in and out of sight : an analysis informed by Cultural-Historical Activity Theory of audio-haptic activities involving people with visual impairments supported by technology

    The main purpose of this thesis is to present a Cultural-Historical Activity Theory (CHAT) based analysis of the activities conducted by and with visually impaired users supported by audio-haptic technology. This thesis covers several studies conducted in two projects. The studies evaluate the use of audio-haptic technologies to support and/or mediate the activities of people with visual impairment. The focus is on activities involving access to two-dimensional information, such as pictures or maps. People with visual impairments can use commercially available solutions to explore static information (raised-line maps and pictures, for example). Solutions for dynamic access, such as drawing a picture or using a map while moving around, are more scarce. Two distinct projects were initiated to remedy the scarcity of dynamic access solutions, each focusing on a separate activity. The first project, HaptiMap, focused on outdoor pedestrian navigation through audio feedback and gestures, mediated by a GPS-equipped mobile phone. The second project, HIPP, focused on drawing and learning about 2D representations in a school setting with the help of haptic and audio feedback. In both cases, visual feedback was also present in the technology, enabling people with vision to take advantage of that modality too. The research questions addressed are: How can audio and haptic interaction mediate activities for people with visual impairment? Are there features of the programming that help or hinder this mediation? How can CHAT, and specifically the Activity Checklist, be used to shape the design process when designing audio-haptic technology together with persons with visual impairments? Results show the usefulness of the Activity Checklist as a tool in the design process, and provide practical application examples. A general conclusion emphasises the importance of modularity, standards, and libre software in rehabilitation technology, to support the development of the activities over time and to let the code evolve with them as a lifelong iterative development process. The research also provides specific design recommendations for the design of the type of audio-haptic systems involved.