149 research outputs found

    Side Pressure for Bidirectional Navigation on Small Devices

    Virtual navigation on a mobile touchscreen is usually performed using finger gestures: drag and flick to scroll or pan, pinch to zoom. While easy to learn and perform, these gestures cause significant occlusion of the display. They also require users to explicitly switch between navigation mode and edit mode to either change the viewport's position in the document or manipulate the actual content displayed in that viewport. SidePress augments mobile devices with two continuous pressure sensors co-located on one of their sides. It provides users with generic bidirectional navigation capabilities at different levels of granularity, all seamlessly integrated to act as an alternative to traditional navigation techniques, including scrollbars, drag-and-flick, or pinch-to-zoom. We describe the hardware prototype, detail the associated interaction vocabulary for different applications, and report on two laboratory studies. The first shows that users can precisely and efficiently control SidePress; the second, that SidePress can be more efficient than drag-and-flick touch gestures when scrolling large documents.
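The abstract stays at the level of the interaction concept, so a minimal sketch may help make the mapping concrete: two side-mounted pressure sensors drive bidirectional scrolling, with harder squeezes producing faster movement. The normalised sensor range, dead zone and quadratic transfer function below are illustrative assumptions, not details taken from the SidePress prototype.

```python
# Hypothetical sketch: map two side-mounted pressure sensors to a scroll velocity.
# Sensor readings are assumed to be normalised to 0.0..1.0; the dead zone and
# gain values are illustrative only, not taken from the SidePress paper.

DEAD_ZONE = 0.05    # ignore light accidental grip pressure
MAX_SPEED = 2000.0  # pixels per second at full pressure

def scroll_velocity(upper_pressure: float, lower_pressure: float) -> float:
    """Positive velocity scrolls down, negative scrolls up."""
    delta = lower_pressure - upper_pressure           # which sensor is squeezed harder
    if abs(delta) < DEAD_ZONE:
        return 0.0
    # Quadratic transfer function: light squeezes give fine control,
    # hard squeezes give fast, coarse scrolling.
    sign = 1.0 if delta > 0 else -1.0
    magnitude = (abs(delta) - DEAD_ZONE) / (1.0 - DEAD_ZONE)
    return sign * MAX_SPEED * magnitude ** 2

# Example: a firm squeeze on the lower sensor scrolls down quickly.
print(scroll_velocity(upper_pressure=0.1, lower_pressure=0.8))
```

The same reading could equally drive other granularities (per-line versus per-page movement, or zoom level), which is the kind of interaction vocabulary the abstract alludes to.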

    Creating mobile gesture-based interaction design patterns for older adults: a study of tap and swipe gestures with Portuguese seniors

    Master's thesis. Multimedia. Faculdade de Engenharia, Universidade do Porto. 201

    Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices

    A generally observable trend of the past 10 years is that the number of sensors embedded in mobile devices such as smartphones and tablets is rising steadily. Arguably, the available sensors are mostly underutilized by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices with the goal of making better use of the available sensing capabilities on mobile devices as well as gaining insights on the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement novel and engaging mobile user interface concepts. We explore three particular areas of interest for research into sensor-based user interface concepts for mobile devices: continuous interaction, around-device interaction and motion gestures. For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a constant sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override capability for the zoom level (Semi-Automatic Zooming) increases the usability of such a system, as shown through a decrease in task completion times and improved user ratings in a user study. The presented work on continuous interaction also highlights how the sensors embedded in current mobile devices can be used to support complex interaction tasks. We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and the physical volume surrounding it, we aim to show how the expressivity and possibilities of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated in future mobile devices to expand the input capabilities of those devices. In order to broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, that are easy to incorporate into existing projects, have good recognition rates and require a low amount of training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication. With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.
The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. They will help developers of mobile user interfaces make better use of the existing sensory capabilities of mobile devices. Furthermore, manufacturers of mobile device hardware obtain suggestions for the types of novel sensor technologies that are needed in order to expand the input capabilities of mobile devices. This allows the implementation of future mobile user interfaces with increased input capabilities, more expressiveness and improved usability.
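The abstract names the $3 Gesture Recognizer and Protractor 3D without giving detail. As a rough illustration of the template-matching style such recognizers use, the sketch below resamples an accelerometer trace to a fixed number of 3D points, translates it to the origin, and scores it against stored templates by average point-to-point distance. This is a simplified stand-in under those assumptions: the published recognizers additionally normalise for rotation, which is omitted here, and templates are assumed to have been preprocessed with the same resample-and-translate steps.

```python
import math

def resample(points, n=32):
    """Resample a 3D accelerometer trace to n points spaced evenly along its path."""
    pts = [tuple(p) for p in points]
    total = sum(math.dist(pts[i - 1], pts[i]) for i in range(1, len(pts)))
    if total == 0:
        return [pts[0]] * n
    interval, acc, out, i = total / (n - 1), 0.0, [pts[0]], 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= interval:
            t = (interval - acc) / d
            q = tuple(a + t * (b - a) for a, b in zip(pts[i - 1], pts[i]))
            out.append(q)
            pts.insert(i, q)   # keep resampling from the newly created point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:        # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def translate_to_origin(points):
    """Centre the trace on its centroid so position does not affect matching."""
    c = tuple(sum(p[k] for p in points) / len(points) for k in range(3))
    return [tuple(p[k] - c[k] for k in range(3)) for p in points]

def recognize(trace, templates):
    """Return the template name with the smallest average point-to-point distance."""
    cand = translate_to_origin(resample(trace))
    def score(name):
        return sum(math.dist(a, b) for a, b in zip(cand, templates[name])) / len(cand)
    return min(templates, key=score)
```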

    Augmented Touch Interactions with Finger Contact Shape and Orientation

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen, and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen lock application.
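The abstract does not say how shape and orientation are derived from the raw contact data. One common way to obtain them, sketched below under the assumption that the panel exposes the set of capacitive cells covered by the finger, is a principal-axis analysis of the contact patch: the dominant axis gives the finger's orientation, and the ratio of the axis lengths separates elongated (oblique) contacts from round fingertip contacts. The input format and thresholds are assumptions, not the study's apparatus.

```python
import math

def contact_features(contact_pixels):
    """Estimate size, elongation and orientation of one touch contact.

    contact_pixels: list of (x, y) cells of the capacitive image covered by the
    finger (an assumed input format). Returns (area, elongation, angle_degrees),
    with the angle measured from the x axis in the 0..180 degree range.
    """
    n = len(contact_pixels)
    cx = sum(x for x, _ in contact_pixels) / n
    cy = sum(y for _, y in contact_pixels) / n
    sxx = sum((x - cx) ** 2 for x, _ in contact_pixels) / n
    syy = sum((y - cy) ** 2 for _, y in contact_pixels) / n
    sxy = sum((x - cx) * (y - cy) for x, y in contact_pixels) / n
    # Principal-axis analysis of the 2x2 covariance matrix, in closed form.
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)
    half_trace = (sxx + syy) / 2
    half_diff = math.hypot((sxx - syy) / 2, sxy)
    lam_max, lam_min = half_trace + half_diff, half_trace - half_diff
    elongation = lam_max / max(lam_min, 1e-9)   # ~1 means round, >>1 means oblique finger
    return n, elongation, math.degrees(angle) % 180.0

# A long, tilted contact patch reads as an oblique touch rather than a fingertip tap.
patch = [(x, x // 2) for x in range(12)]
print(contact_features(patch))
```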

    Investigating New Forms of Single-handed Physical Phone Interaction with Finger Dexterity

    With phones becoming more powerful and such an essential part of our lives, manufacturers are creating new device forms and interactions to better support even more diverse functions. A common goal is to enable a larger input space and expand the input vocabulary using new physical phone interactions other than touchscreen input. This thesis explores how utilizing our hand and finger dexterity can expand physical phone interactions. To understand how we can physically manipulate a phone using the fine motor skills of the fingers, we identify and evaluate single-handed "dexterous gestures". Four manipulations are defined: shift, spin (yaw axis), rotate (roll axis) and flip (pitch axis), with a formative survey showing all except flip have been performed for various reasons. A controlled experiment examines the speed, behaviour, and preference of these manipulations in the form of dexterous gestures, considering two directions and two movement magnitudes. Using a heuristic recognizer for spin, rotate, and flip, a one-week usability experiment finds that increased practice and familiarity improve the speed and comfort of dexterous gestures. With the confirmation that users can loosen their grip and perform gestures with finger dexterity, we investigate the performance of one-handed touch input on the side of a mobile phone. An experiment examines grip change and subjective preference when reaching for side targets using different fingers. Two following experiments examine taps and flicks using the thumb and index finger in a new two-dimensional input space. We simulate a side-touch sensor with a combination of capacitive sensing and motion tracking to distinguish touches on the lower, middle, or upper edges. We further focus on physical phone interaction with a new phone form factor by exploring and evaluating single-handed folding interactions suitable for "modern flip phones": smartphones with a bendable full-screen touch display. Three categories of interactions are identified: only-fold, touch-enhanced fold, and fold-enhanced touch, in which gestures are created using fold direction, fold magnitude, and touch position. A prototype evaluation device is built to resemble current flip phones, but with a modified spring system to enable folding in both directions. A study investigates performance and preference for 30 fold gestures, revealing which are most promising. Overall, our exploration shows that users can loosen their grip to physically interact with phones in new ways, and these interactions could be practically integrated into daily phone applications.
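The heuristic recognizer for spin, rotate and flip is not detailed in the abstract. A plausible minimal version, sketched below, integrates gyroscope angular velocity over the gesture window and labels the gesture by the axis with the largest accumulated rotation; the axis-to-gesture mapping, threshold and direction convention are illustrative assumptions rather than the thesis' actual recognizer.

```python
# Hypothetical heuristic in the spirit of the abstract: classify a dexterous
# gesture by the phone axis that accumulates the most rotation.
import math

GESTURE_BY_AXIS = {0: "rotate (roll)", 1: "flip (pitch)", 2: "spin (yaw)"}
MIN_ROTATION_DEG = 60.0   # ignore small incidental wrist movement

def classify(gyro_samples, dt):
    """gyro_samples: list of (roll_rate, pitch_rate, yaw_rate) in rad/s; dt in seconds."""
    accumulated = [0.0, 0.0, 0.0]
    for sample in gyro_samples:
        for axis in range(3):
            accumulated[axis] += sample[axis] * dt       # integrate angular velocity
    degrees = [math.degrees(a) for a in accumulated]
    axis = max(range(3), key=lambda k: abs(degrees[k]))
    if abs(degrees[axis]) < MIN_ROTATION_DEG:
        return None                                      # no gesture detected
    direction = "negative" if degrees[axis] < 0 else "positive"
    return GESTURE_BY_AXIS[axis], direction
```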

    Pressure as a non-dominant hand input modality for bimanual interaction techniques on touchscreen tablets

    Touchscreen tablet devices present an interesting challenge to interaction designers: they are not quite handheld like their smartphone cousins, and although their form factor affords usage away from the desktop and other surfaces, it requires the user to support a larger weight and navigate more screen space. Thus, the repertoire of touch input techniques is often reduced to those performable with one hand. Previous studies have suggested there are bimanual interaction techniques that offer both manual and cognitive benefits over equivalent unimanual techniques, and that pressure is useful as a primary input modality on mobile devices and as an augmentation to finger/stylus input on touchscreens. However, there has been no research on the use of pressure as a modality to expand the range of bimanual input techniques on tablet devices. The first two experiments investigated bimanual scrolling on tablet devices, based on the premise that the control of scrolling speed and vertical scrolling direction can be thought of as separate tasks, and that the current status quo of combining both into a single one-handed (unimanual) gesture on a touchscreen or a physical dial can be improved upon. Four bimanual scrolling techniques were compared to two status quo unimanual scrolling techniques in a controlled linear targeting task. The Dial and Slider bimanual technique was superior to the others in terms of Movement Time and the Dial and Pressure bimanual technique was superior in terms of Subjective Workload, suggesting that the bimanual scrolling techniques are better than the status quo unimanual techniques in terms of both performance and preference. The same interaction techniques were then evaluated using a photo browsing task that was chosen to resemble the way people browse their music collections when they are unsure about what they are looking for. These studies demonstrated that pressure is a more effective auxiliary modality than a touch slider in the context of bimanual scrolling techniques. These studies also demonstrated that the bimanual techniques did not provide any concrete benefits over the Unimanual touch scrolling technique, which is the status quo scrolling technique on commercially available touchscreen tablets and smartphones, in the context of an image browsing task. A novel investigation of pressure input was presented in which it was characterised as a transient modality, one that has a natural inverse (bounce-back) and a state that only persists during interaction. Two studies were carried out investigating the precision of applied pressure as part of a bimanual interaction, where the selection event is triggered by the dominant hand on the touchscreen (using existing touchscreen input gestures), with the goal of studying pressure as a functional primitive, without implying any particular application. Two aspects of pressure input were studied: pressure targeting and maintaining pressure over time. The results demonstrated that, using a combination of non-dominant hand pressure and dominant-hand touchscreen taps, overall pressure targeting accuracy was high (93.07%). For more complicated dominant-hand input techniques (swipe, pinch and rotate gestures), pressure targeting accuracy was still high (86%).
The results demonstrated that participants were able to achieve high levels of pressure accuracy (90.3%) using dominant-hand swipe gestures (the simplest gesture in the study), suggesting that the ability to perform a simultaneous combination of pressure and touchscreen gesture input depends on the complexity of the dominant hand action involved. This thesis provides the first detailed study of the use of non-dominant hand pressure input to enable bimanual interaction techniques for tablet devices. It explores the use of pressure as a modality that can expand the range of available bimanual input techniques while the user is seated and comfortably holding the device, and offers designers guidelines for including pressure as a non-dominant hand input modality for bimanual interaction techniques, in a way that supplements existing dominant-hand action.
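As a concrete reading of the pressure-targeting interaction described above, the sketch below quantises a continuous non-dominant-hand pressure reading into a small number of pressure targets and commits whichever target is held at the moment the dominant hand performs its touchscreen gesture. The number of levels, the normalised sensor range and the function names are assumptions for illustration, not the thesis' implementation.

```python
# Minimal sketch of the bimanual pattern: the non-dominant hand dwells on a
# pressure level; the dominant-hand touch gesture commits the selection.

NUM_LEVELS = 4          # discrete pressure targets
MAX_PRESSURE = 1.0      # assumed normalised sensor reading

def pressure_level(pressure: float) -> int:
    """Quantise a 0..1 pressure reading into one of NUM_LEVELS targets."""
    clamped = min(max(pressure, 0.0), MAX_PRESSURE)
    return min(int(clamped / MAX_PRESSURE * NUM_LEVELS), NUM_LEVELS - 1)

def on_dominant_hand_gesture(gesture: str, current_pressure: float):
    """Called when the touchscreen reports a dominant-hand gesture (tap, swipe, ...)."""
    level = pressure_level(current_pressure)
    # The committed action combines which gesture was made with which
    # pressure level the other hand was holding at that moment.
    return {"gesture": gesture, "pressure_target": level}

print(on_dominant_hand_gesture("tap", 0.62))   # -> pressure_target 2 of 0..3
```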

    Improving digital object handoff using the space above the table

    Object handoff – that is, passing an object or tool to another person – is an extremely common activity in collaborative tabletop work. On digital tables, object handoff is typically accomplished by sliding the object on the table surface – but surface-only interactions can be slow and error-prone, particularly when there are multiple people carrying out multiple handoffs. An alternative approach is to use the space above the table for object handoff; this provides more room to move, but requires above-surface tracking. I developed two above-the-surface handoff techniques that use simple and inexpensive tracking: a force-field technique that uses a depth camera to determine hand proximity, and an electromagnetic-field technique called ElectroTouch that provides positive indication when people touch hands over the table. These new techniques were compared to three kinds of existing surface-only handoff (sliding, flicking, and surface-only Force-Fields). The study showed that the above-surface techniques significantly improved both speed and accuracy, and that ElectroTouch was the best technique overall. Also, as object interactions move above the surface of the table, the representation of off-table objects becomes crucial. To address the issue of off-table digital object representation, several object designs were created and evaluated. The results of the present research provide designers with practical new techniques for substantially increasing performance and interaction richness on digital tables.
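The force-field technique is described only as using a depth camera to determine hand proximity. A minimal sketch of that idea, with an invented threshold and data format, might look like the following; ElectroTouch would replace the distance test with the positive touch signal from the electromagnetic sensing.

```python
import math

HANDOFF_DISTANCE_MM = 80.0   # illustrative proximity threshold above the table

def maybe_handoff(giver_hand, receiver_hand, carried_object):
    """Transfer the carried object when the two tracked hands come close enough.

    giver_hand / receiver_hand are assumed (x, y, z) positions in millimetres
    from an above-table depth tracker; carried_object is a dict with an 'owner' field.
    """
    if carried_object is None:
        return None
    if math.dist(giver_hand, receiver_hand) <= HANDOFF_DISTANCE_MM:
        carried_object["owner"] = "receiver"   # positive indication: complete the handoff
        return carried_object
    return None
```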

    Designing wearable interfaces for blind people

    Master's thesis in Informatics Engineering (Architecture, Systems and Computer Networks), Universidade de Lisboa, Faculdade de Ciências, 2015. Nowadays, touchscreen devices are increasingly ubiquitous. Until recently, most touchscreens offered few accessibility features for visually impaired people, leaving the devices largely unusable to them. Yet this technology is ever-present in our daily lives, in mobile phones and tablets, and these devices are increasingly essential to us, since they hold a great deal of personal information and support tasks such as payment through electronic wallets. The lack of accessibility of this kind of screen stems from the fact that the interfaces are based on what users see on the screen and on touching the content presented there, which becomes a major problem when a visually impaired person tries to use them. There are some solutions on the market, but almost all are based on audio feedback, and this is not the best solution when it comes to personal information the user wishes to keep private. For example, when a user is on a bus and receives a message, it is read by a screen reader through the device's speakers; this harms the user's privacy, because everyone around them will hear the content of the message. One solution to this problem may be the use of vibration and physical keys, which remove the need for screen readers; however, the problem remains for menu navigation. One way to address it is through a gesture-based interface, a flexible and intuitive form of interaction with these devices. To date, many approaches have presented solutions, yet none resolves all the points raised, and in one way or another they have to be complemented with other devices. Guerreiro and colleagues (2012) presented a prototype that enables reading text through vibration, but the full impact of day-to-day use is not taken into account. Another study, by Myung-Chul Cho (2002), presents a pair of gloves for writing encoded in the Braille alphabet, but it is not tested with an integrated reading component other than audio feedback. Two other studies stand out with respect to the use of gestures for navigating the device. Ruiz (2011) carried out an elicitation of mid-air gestures, but did not include blind people in the study, which may lead to the exclusion of such users. Another study, by Kane (2011), includes blind people and targets gesture interaction, but requires physical contact with the touchscreen. The approach presented in this study integrates the best of these solutions into a single device. Our main goal is to make mobile phones more accessible to blind people, so that the devices can be integrated into their daily lives. To that end, we developed an interface based on a pair of gloves: wearing them, the user can read and write messages and also perform gestures for other tasks. The gloves take advantage of users' knowledge of Braille to read and write textual information. For the reading feature, we installed six vibration motors on the glove fingers: on the index, middle and ring fingers of both hands.
These motors mimic the key layout of a Braille typewriter such as the Perkins Brailler. For writing, we installed push buttons on the tips of the same fingers, each representing one dot of a Braille cell. For gesture detection we opted for an accelerometer-based approach, with the accelerometer placed on the back of the glove's hand. For better usability, each glove has two layers, so that all components can be installed between the two layers of fabric, allowing the user to put the gloves on and take them off without having to worry about the electronic components. The construction of the gloves, as well as all the tests carried out, involved a group of blind people, students and teachers, from the Fundação Raquel e Martin Sain. To evaluate how blind users perform with our device, we ran tests of message reception (reading) and message sending (writing). The reading test was carried out with a group of blind people only: the user received letters in Braille and replicated the vibrations felt using the gloves' buttons, and we evaluated character recognition rates. We obtained an average of 31%, although these results are highly dependent on the users' abilities. In the writing test, the user was asked for a letter and wrote it in Braille using the gloves; performance in this component averaged a 74% accuracy rate. Most errors in this test were cases in which the difference between the requested letter and what the user wrote was only one finger. These tests were quite revealing about the possible use of these gloves by blind people: they indicated that users should be trained beforehand to maximise results, and that some experience with the device may be necessary. Gesture recognition allows the user to carry out various tasks with a smartphone, such as answering/rejecting a call and navigating menus. To assess which gestures blind and sighted users suggest for carrying out tasks on smartphones, we ran an elicitation study, which consists of asking users to suggest gestures for performing tasks. We found that most of the gestures invented by participants tend to be physical, in-context, discrete and simple, and that they use only a single spatial axis. We also concluded that there is consensus among users for all the proposed tasks. In addition, the elicitation study revealed that blind people prefer simpler gestures, in contrast to a preference for more complex gestures among sighted people. Since the device requires training for gesture recognition, we investigated which kind of training is most suitable for its use. With the results obtained in the elicitation study, we compared training on individual users' own gestures, training per population (blind and sighted) and training on both populations together (global). We found that personalised training, that is, training performed by the user themselves, is much more effective than population training or global training. The fact that the user can send and receive messages without depending on several devices and/or applications sidesteps the oft-raised privacy concerns.
With the same device the user can also navigate their smartphone's menus through simple and intuitive gestures. Our results suggest that the use of a wearable device within the blind community will be possible. With the exponential growth of the wearables market and the effort the academic community is putting into accessibility technologies, there is still plenty of room for improvement. With this project, we hope that wearable assistive devices will come to play an important role in the social integration of people with disabilities, thereby creating a more egalitarian and fair society.
Nowadays touch screens are ubiquitous, present in almost all modern devices. Most touch screens provide few accessibility features for blind people, leaving them partly unusable. There are some solutions, based on audio feedback, that help blind people use touch screens in their daily tasks. The problem is that those solutions raise privacy issues, since the content on screen is transmitted through the device speakers. Also, these screen readers make the interaction slow, and they are not easy to use. The main goal of this project is to develop a new wearable interface that allows blind people to interact with smartphones. We developed a pair of gloves that is capable of recognising mid-air gestures and also allows the input and output of text. To evaluate the usability of input and output, we conducted a user study to assess character recognition and writing performance. Character recognition rates were highly user-dependent, and writing performance showed some problems, mostly related to single-finger errors. Then, we conducted an elicitation study to assess what type of gestures blind and sighted people suggest. Sighted people suggested more complex gestures than blind people. However, all the gestures tend to be physical, in-context, discrete and simple, and to use only a single axis. We also found that training based on the user's own gestures is better for recognition accuracy. Nevertheless, the text input and output components still require new approaches to improve users' performance. Still, this wearable interface seems promising for simple actions that do not require a high cognitive load. Overall, our results suggest that we are on track to making it possible for blind people to interact with mobile devices in daily life.
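Because the gloves mirror the six keys of a Perkins Brailler (index, middle and ring fingers of both hands), the reading and writing paths reduce to encoding and decoding six-dot Braille cells. The sketch below shows that mapping for a handful of letters; the finger labels are assumptions for illustration, and real firmware would drive motors and sample buttons rather than print strings.

```python
# Standard Braille dot numbers (1..6) for a few letters, and an assumed
# dot-to-finger assignment mirroring the Perkins Brailler key layout.

BRAILLE_DOTS = {
    "a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}, "e": {1, 5},
    "f": {1, 2, 4}, "g": {1, 2, 4, 5}, "h": {1, 2, 5}, "i": {2, 4}, "j": {2, 4, 5},
}
DOT_TO_FINGER = {
    1: "left index", 2: "left middle", 3: "left ring",
    4: "right index", 5: "right middle", 6: "right ring",
}

def motors_for(letter):
    """Which vibration motors to pulse so the wearer feels the letter."""
    return sorted(DOT_TO_FINGER[d] for d in BRAILLE_DOTS[letter])

def letter_from_buttons(pressed_dots):
    """Decode the dots typed on the fingertip buttons back into a letter."""
    for letter, dots in BRAILLE_DOTS.items():
        if dots == set(pressed_dots):
            return letter
    return None

print(motors_for("d"))              # ['left index', 'right index', 'right middle']
print(letter_from_buttons({1, 5}))  # 'e'
```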

    Understanding intuitive gestures in wearable mixed reality environments

    Augmented and mixed reality experiences are increasingly accessible in both professional and daily settings due to advances in technology. Technology continues to evolve into multiple different forms, including tablet experiences in the form of augmented reality (AR) and mixed reality (MR) using wearable heads-up displays (HUDs). Currently, standards for best usability practices continue to evolve for MR HUD two-dimensional user interfaces (2D UI) and three-dimensional user interfaces (3D UI). Therefore, research on evolving usability practices will serve as guidance for future development of MR HUD applications. The objective of this dissertation is to understand what gestures users intuitively make to respond to an MR environment while wearing a HUD. The Microsoft HoloLens is a wearable HUD that can be used for MR. The Microsoft HoloLens supports two core gestures that were developed for interacting with holographic interfaces in MR. Although the current gestures can be learned to generate successful outcomes, this dissertation provides a better understanding of which gestures are intuitive to new users of an MR environment. To understand which gestures are intuitive to users, 74 participants without any experience with MR attempted to make gestures within a wearable MR HUD environment. The results of this study show that previous technology experience can influence gesture choice; however, gesture choice also depends on the goal of the interaction scenario. Results suggest that a greater number of programmed gestures are needed in order to best utilize all tools available in wearable HUDs in MR. Results of this dissertation suggest that five new gestures should be created, with three of these gestures serving to reflect a connection between MR interaction and current gesture-based technology. Additionally, results suggest that two new gestures should be created that reflect a connection between gestures for MR and daily movements in the physical world space.

    Development of Human Performance Models for Optimal Touch Gesture Design

    Thesis (Ph.D.), Department of Industrial Engineering, College of Engineering, Seoul National University Graduate School, August 2017. Myung Hwan Yun. The touch interface has evolved into the dominant interface system for smartphones over the last 10 years. This evolution applies not only to smartphones, but also to small hand-held smart devices like portable game consoles and tablet devices. Even further, the most recent Microsoft Windows operating system supports both the traditional point-and-click interface and a touch interface, for broader coverage of the OS on digital devices. Identifying the factors contributing to human performance on touch interface systems has been studied by a wide range of researchers globally. Designers and manufacturers of smart devices with touch interface systems can benefit from the findings of these studies, since they provide opportunities to design and implement better-performing and more usable products with a competitive edge over competitors. In this study, we investigated factors affecting human performance on touch interface systems to establish practical design guidelines for designers and manufacturers of smart devices with touch interfaces. The first group of factors comprises demography-related variables such as gender, region and age. The second group comprises interaction-related variables such as the number of hands involved in interacting with the touch system (one-handed versus two-handed postures). Finally, and most importantly, design-related variables such as the size, shape and location of touch targets are investigated. The main goal of this study is to identify the factors that most affect human performance on touch interface systems and to establish mathematical models relating them. The developed performance models can be leveraged to estimate expected human performance without conducting usability testing on a given touch interface system. Once demography-, interaction- and design-related variables are given, we can estimate the expected performance level by inputting those variables into the established model, thus contributing to optimal design practice. The touch gestures considered in this study are tap, move and flick, the most widely used gestures in designing and implementing touch interface systems. We recruited 259 subjects from four major metropolitan areas across three countries (New York, San Francisco, London and Paris) and conducted a controlled laboratory experiment. In order to assess human performance for each touch gesture, we defined individual performance measures such as task completion time, velocity, throughput as introduced by Fitts' law (Fitts, 1954), the variance/accuracy ratio introduced by Chan & Childress (1990), and accuracy or offset tendency from a desired target line. By investigating these performance measures, we derived design guidelines for design specifications such as size and movement direction, as well as qualitative insights on how touch gestures differ across all the factors gathered from the experimental setup.
Design strategies and guidelines as well as human performance modeling will contribute to developing effective and efficient touch interface systems.
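Among the performance measures listed above is throughput from Fitts' law. A small worked example using the common Shannon formulation is shown below; the dissertation's exact formulation may differ in detail, and the distances and times are illustrative only.

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits: log2(D/W + 1)."""
    return math.log2(distance / width + 1)

def throughput(distance: float, width: float, movement_time_s: float) -> float:
    """Throughput in bits per second for one pointing/tapping condition."""
    return index_of_difficulty(distance, width) / movement_time_s

# Example: a 9 mm target 60 mm away acquired in 0.45 s.
print(round(index_of_difficulty(60, 9), 2))   # ~2.94 bits
print(round(throughput(60, 9, 0.45), 2))      # ~6.53 bits/s
```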