
    Development of a human performance model for optimal touch gesture design (최적의 터치 동작 설계를 위한 인간 성능 모형 개발)

    Thesis (Ph.D.) -- Department of Industrial Engineering, College of Engineering, Graduate School, Seoul National University, August 2017. Advisor: 윤명환 (Myung Hwan Yun). The touch interface has evolved into the dominant interface for smartphones over the last 10 years. This evolution applies not only to smartphones, but also to small hand-held smart devices such as portable game consoles and tablets. Moreover, the most recent Microsoft Windows operating system supports both the traditional point-and-click interface and a touch interface, broadening the operating system's coverage of digital devices. Factors contributing to human performance on touch interface systems have been studied by a wide range of researchers globally. Designers and manufacturers of smart devices with touch interfaces could benefit from the findings of these studies, since they may provide opportunities to design and implement better-performing, more usable products with a competitive edge over competitors. In this study, we investigated factors affecting human performance on touch interface systems in order to establish practical design guidelines for designers and manufacturers of such devices. The first group of factors is demography-related variables such as gender, region and age. The second group is interaction-related variables such as the number of hands involved in interacting with the touch system (one-handed versus two-handed postures). Finally, and most importantly, design-related variables such as the size, shape and location of touch targets are investigated. The main goal of this study is to identify the factors that most affect human performance on touch interface systems and to establish mathematical models relating them. The developed performance models can be leveraged to estimate expected human performance without conducting usability testing on a given touch interface system. Once the demography-, interaction- and design-related variables are given, the expected performance level can be predicted by feeding those variables into the established models, thus contributing to optimal design practice. The touch gestures considered in this study are tap touch, move touch and flick touch, which are the most widely used gestures in designing and implementing touch interface systems. We recruited 259 subjects from four major metropolitan areas across three countries (New York, San Francisco, London and Paris) and conducted a controlled laboratory experiment. To assess human performance for each touch gesture, we defined gesture-specific performance measures such as task completion time, velocity, throughput as introduced by Fitts' law (Fitts, 1954), the variance/accuracy ratio introduced by Chan & Childress (1990), and accuracy or offset tendency from a desired target line. By investigating these performance measures, we derived design guidelines on specifications such as target size and movement direction, as well as qualitative insights into how touch gestures differ across all the factors gathered from the experimental setup.
Design strategies and guidelines as well as human performance modeling will contribute to the development of effective and efficient touch interface systems.
Table of contents: 1 Introduction (1.1 Background; 1.2 Research questions; 1.3 Document outline). 2 Literature reviews (2.1 Potential variables affecting touch interfaces; 2.2 Gestures used in touch interface design; 2.3 How people hold mobile devices; 2.4 Design for thumbs; 2.5 Touch target size guidelines; 2.6 Estimating touch sizes; 2.7 Human performance models; 2.8 Human performance by gender and age; 2.9 Thumb-based touch interaction; 2.10 Models of human motor control). 3 Tap touch experiment (3.1 Introduction; 3.2 Methods: task design, experimental design, subjects, data analysis method; 3.3 Results: normality check, variables affecting task completion time, distance to target, angle from the positive x-axis to the touch point, and speed-accuracy ratio; 3.4 Conclusion and discussion: speed-accuracy trade-off, implications of the angle from the x-axis, leveraging performance prediction models, recommended design strategies, tap target size recommendation). 4 Move touch experiment (4.1 Introduction; 4.2 Methods: task design, experimental design, subjects; 4.3 Data analysis and results: data handling, normality check, variables affecting task velocity, accuracy of initial touch, accuracy of final release, and throughput; 4.4 Conclusion and discussion: design strategy for one hand versus two hands, moving direction, object sizes, leveraging performance prediction models). 5 Flick touch experiment (5.1 Introduction; 5.2 Method: task design, experimental design; 5.3 Data analysis method: data handling; 5.4 Results: normality check, variables affecting task completion time, travel distance, angle, and offset Y; 5.5 Conclusion and discussion: design strategies on demography- and interaction-related variables and on design-related variables for flick movement, leveraging performance prediction models). 6 Conclusion (6.1 Research goals; 6.2 Summary of findings; 6.3 Performance prediction models; 6.4 Limitations and future studies). Bibliography. Abstract (in Korean).
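    For reference, the throughput measure named in the abstract is commonly computed from the Shannon formulation of Fitts' index of difficulty; the sketch below shows that standard calculation (the exact model and coefficients fitted in the thesis may differ).

        import math

        def fitts_throughput(distance: float, width: float, movement_time_s: float) -> float:
            """Throughput (bits/s) from the Shannon formulation of Fitts' index of difficulty.

            distance and width must be in the same unit; their ratio is dimensionless.
            """
            index_of_difficulty = math.log2(distance / width + 1.0)  # bits
            return index_of_difficulty / movement_time_s

        # Hypothetical example: a 6 mm target 40 mm away, acquired in 0.45 s
        print(fitts_throughput(40.0, 6.0, 0.45))  # ≈ 6.5 bits/s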

    Using a smart device: the roles of mobile application usage on toddlers and pre-schoolers

    Abstract. Technology has found its way into the hands of young children, propelling various researchers to carry out studies on the impact this usage is having on them. In the available studies on the effect of technology and mobile applications on young children, most researchers have focused their attention on either the advantages or the disadvantages. This research work does not focus on the advantages and disadvantages; instead it addresses the role of mobile technology and its applications in young children's lives through two research questions: 1. What role do smart devices and mobile apps play in the lives of toddlers and preschoolers? 2. Should toddlers and preschoolers be allowed to use a smart device? The research methodology used in this work is a qualitative method implemented through a survey. The survey collected data from 20 parents in order to create a framework as the basis for the results. The findings confirm that smart devices and mobile apps play a big role in young children's lives, but parents are obligated to provide protection and control to mitigate the negative effects they could have on users. Mobile applications for young children are here to stay and will continue to grow; parents should take note of how their children use the device and for what purpose. Likewise, app developers should develop more suitable apps for young children. Also, regulatory bodies in charge of applications developed for young children should regulate what sort of adverts pop up in those applications.

    Cruiser and PhoTable: Exploring Tabletop User Interface Software for Digital Photograph Sharing and Story Capture

    Digital photography has not only changed the nature of photography and the photographic process, but also the manner in which we share photographs and tell stories about them. Some traditional methods, such as the family photo album or passing around piles of recently developed snapshots, are lost to us unless the digital photos are printed. The current, purely digital, methods of sharing do not provide the same experience as printed photographs, and they do not provide effective face-to-face social interaction around photographs, as experienced during storytelling. Research has found that people are often dissatisfied with sharing photographs in digital form. The recent emergence of the tabletop interface as a viable multi-user, direct-touch, interactive large horizontal display has provided hardware with the potential to improve our collocated activities such as digital photograph sharing. However, while some software to communicate with various tabletop hardware technologies exists, the software aspects of tabletop user interfaces are still at an early stage and require careful consideration in order to provide an effective, multi-user immersive interface that arbitrates the social interaction between users, without the necessary computer-human interaction interfering with the social dialogue. This thesis presents PhoTable, a social interface allowing people to effectively share, and tell stories about, recently taken, unsorted digital photographs around an interactive tabletop. In addition, the computer-arbitrated digital interaction allows PhoTable to capture the stories told and associate them as audio metadata with the appropriate photographs. By leveraging the tabletop interface and providing a highly usable and natural interaction, we can enable users to become immersed in their social interaction, telling stories about their photographs, and allow the computer interaction to occur as a side-effect of the social interaction. Correlating the computer interaction with the corresponding audio allows PhoTable to annotate an automatically created digital photo album with audible stories, which may then be archived. These stories remain useful for future sharing -- both collocated and remote (e.g. via the Internet) -- and also provide a personal memento both of the event depicted in the photograph (e.g. as a reminder) and of the enjoyable photo-sharing experience at the tabletop. To provide the necessary software to realise an interface such as PhoTable, this thesis explored the development of Cruiser: an efficient, extensible and reusable software framework for developing tabletop applications. Cruiser contributes a set of programming libraries and the necessary application framework to facilitate the rapid and highly flexible development of new tabletop applications. It uses a plugin architecture that encourages code reuse, stability and easy experimentation, and leverages the dedicated computer graphics hardware and multi-core processors of modern consumer-level systems to provide a responsive and immersive interactive tabletop user interface that is agnostic to the tabletop hardware and operating platform, using efficient, native cross-platform code. Cruiser's flexibility has allowed a variety of novel interactive tabletop applications to be explored by other researchers using the framework, in addition to PhoTable. To evaluate Cruiser and PhoTable, this thesis follows recommended practices for systems evaluation.
The design rationale is framed within the above scenario and vision, which we explore further, and the resulting design is critically analysed based on user studies, heuristic evaluation and a reflection on how it evolved over time. The effectiveness of Cruiser was evaluated in terms of its ability to realise PhoTable, its use by others to explore many new tabletop applications, and an analysis of performance and resource usage. The usability, learnability and effectiveness of PhoTable were assessed on three levels: careful usability evaluations of elements of the interface; informal observations of usability when Cruiser was available to the public in several exhibitions and demonstrations; and a final evaluation of PhoTable in use for storytelling, where this had the side effect of creating a digital photo album consisting of the photographs users interacted with on the table and the associated audio annotations which PhoTable automatically extracted from the interaction. We conclude that our approach to design has resulted in an effective framework for creating new tabletop interfaces. The parallel goal of exploring the potential for tabletop interaction as a new way to share digital photographs was realised in PhoTable. It is able to support the envisaged goal of an effective interface for telling stories about one's photos. As a serendipitous side-effect, PhoTable was effective in the automatic capture of the stories about individual photographs for future reminiscence and sharing. This work provides foundations for future work in creating new ways to interact at a tabletop and new ways to capture personal stories around digital photographs for sharing and long-term preservation.
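    Cruiser's actual plugin API is not reproduced in the abstract above; as a purely illustrative sketch of the general plugin-registry pattern it describes (Cruiser itself is native C++; Python is used here only for brevity, and all names are assumptions), an application can be assembled from independently registered plugins:

        from typing import Callable, Dict

        class PluginRegistry:
            """Minimal sketch: plugins register factories by name, and an
            application composes itself from the plugins it requests."""

            def __init__(self) -> None:
                self._factories: Dict[str, Callable[[], object]] = {}

            def register(self, name: str, factory: Callable[[], object]) -> None:
                self._factories[name] = factory

            def create(self, name: str):
                return self._factories[name]()  # raises KeyError if the plugin is missing

        registry = PluginRegistry()

        # Hypothetical photo-viewing plugin registered with the framework.
        class PhotoViewer:
            def render(self) -> None:
                print("rendering photo objects on the tabletop surface")

        registry.register("photo_viewer", PhotoViewer)
        registry.create("photo_viewer").render()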

    Framing Movements for Gesture Interface Design

    Gesture interfaces are an attractive avenue for human-computer interaction, given the range of expression that people can engage in when gesturing. Consequently, there is a long-running stream of research into gesture as a means of interaction in the field of human-computer interaction. However, most of this research has focussed on the technical challenges of detecting and responding to people's movements, or on exploring the interaction possibilities opened up by technical developments. There has been relatively little research on how to actually design gesture interfaces, or on the kinds of understandings of gesture that might be most useful to gesture interface designers. Running parallel to research in gesture interfaces, there is a body of research into human gesture, which would seem a useful source from which to draw knowledge that could inform gesture interface design. However, there is a gap between the ways that 'gesture' is conceived of in gesture interface research and in gesture research. In this dissertation, I explore this gap and reflect on the appropriateness of existing research into human gesturing for the needs of gesture interface design. Through a participatory design process, I designed, prototyped and evaluated a gesture interface for the work of the dental examination. Against this grounding experience, I undertook an analysis of the work of the dental examination, with particular focus on the roles that gestures play in the work, in order to compare against and discuss existing gesture research. I take the work of the gesture researcher McNeill as a point of focus, because he is widely cited within the gesture interface research literature. I show that although McNeill's research into human gesture can be applied to some important aspects of the gestures of dentistry, there remains a range of gestures that McNeill's work does not deal with directly, yet which play an important role in the work and could usefully be responded to with gesture interface technologies. I discuss some other strands of gesture research, which are less widely cited within gesture interface research but offer a broader conception of gesture that would be useful for gesture interface design. Ultimately, I argue that the gap in conceptions of gesture between gesture interface research and gesture research is an outcome of the different interests that each community brings to bear on the research. What gesture interface research requires is attention to the problems of designing gesture interfaces for authentic contexts of use, and assessment of existing theory in light of this.

    An Exploration of Multi-touch Interaction Techniques

    Research in multi-touch interaction has typically focused on direct spatial manipulation; techniques have been created to produce the most intuitive mapping between the movement of the hand and the resultant change in the virtual object. As we attempt to design for more complex operations, the effectiveness of spatial manipulation as a metaphor becomes weak. We introduce two new platforms for multi-touch computing: a gesture recognition system, and a new interaction technique. I present Multi-Tap Sliders, a new interaction technique for operation in what we call non-spatial parametric spaces. Such spaces do not have an obvious literal spatial representation (e.g., exposure, brightness, contrast and saturation for image editing). The multi-tap sliders encourage the user to keep her visual focus on the target, instead of requiring her to look back at the interface. My research emphasizes ergonomics, clear visual design, and fluid transition between modes of operation. Through a series of iterations, I develop a new technique for quickly selecting and adjusting multiple numerical parameters. Evaluations of multi-tap sliders show improvements over traditional sliders. To facilitate further research on multi-touch gestural interaction, I developed mGestr: a training and recognition system using hidden Markov models for designing a multi-touch gesture set. Our evaluation shows successful recognition rates of up to 95%. The recognition framework is packaged into a service for easy integration with existing applications.
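    mGestr's implementation is not given in the abstract; the sketch below illustrates the general HMM-based recognition approach it describes, using the third-party hmmlearn package: one Gaussian HMM is trained per gesture class on sequences of (x, y) touch samples, and a new gesture is assigned to the class whose model scores it highest. The function names, state count and data layout are illustrative assumptions, not mGestr's API.

        import numpy as np
        from hmmlearn import hmm  # third-party package, assumed available

        def train_gesture_models(training_data, n_states=5):
            """training_data: dict mapping gesture name -> list of (T_i, 2) arrays of touch points."""
            models = {}
            for name, sequences in training_data.items():
                X = np.concatenate(sequences)          # stack all sequences for this gesture
                lengths = [len(s) for s in sequences]  # per-sequence lengths, as hmmlearn expects
                model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
                model.fit(X, lengths)
                models[name] = model
            return models

        def recognize(models, sequence):
            """Return the gesture whose HMM gives the new (T, 2) sequence the highest log-likelihood."""
            return max(models, key=lambda name: models[name].score(sequence))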

    Interacting "Through the Display"

    The increasing availability of displays at lower costs has led to their proliferation in our everyday lives. Additionally, mobile devices are ready to hand and have been proposed as interaction devices for external screens. However, only their input mechanism has been taken into account, without considering three additional factors in environments hosting several displays: first, a connection needs to be established to the desired target display (modality). Second, screens in the environment may be re-arranged (flexibility). And third, displays may be out of the user's reach (distance). In our research we aim to overcome the problems resulting from these characteristics. The overall goal is a new interaction model that allows for (1) a non-modal connection mechanism for impromptu use of various displays in the environment, (2) interaction on and across displays in highly flexible environments, and (3) interacting at variable distances. In this work we propose a new interaction model, called through-the-display interaction, which enables users to interact with remote content on their personal device in an absolute and direct fashion. To gain a better understanding of the effects of the additional characteristics, we implemented two prototypes, each of which investigates a different distance to the target display: LucidDisplay allows users to place their mobile device directly on top of a larger external screen, while MobileVue enables users to interact with an external screen at a distance. In each of these prototypes we analyzed the effects on the remaining two criteria, namely the modality of the connection mechanism and the flexibility of the environment. With the findings gained in this initial phase we designed Shoot & Copy, a system that allows the detection of screens purely based on their visual content. Users aim their personal device's camera at the target display, which then appears in the live video shown in the viewfinder. To select an item, users take a picture, which is analyzed to determine the targeted region. We further extended this approach to multiple displays by using a centralized component serving as a gateway to the display environment. In Tap & Drop we refined this prototype to support real-time feedback. Instead of taking pictures, users can now aim their mobile device at the display and start interacting immediately. In doing so, we broke the rigid sequential interaction of content selection and content manipulation. Both prototypes allow for (1) connections in a non-modal way (i.e., aim at the display and start interacting with it) from the user's point of view and (2) fully flexible environments (i.e., the mobile device tracks itself with respect to displays in the environment). However, the wide-angle lenses and thus greater fields of view of current mobile devices still do not allow for variable distances. In Touch Projector, we overcome this limitation by introducing zooming in combination with temporarily freezing the video image. Based on our extensions to the taxonomy of mobile device interaction on external displays, we created a refined model of interacting through the display for mobile use. It enables users to interact impromptu without explicitly establishing a connection to the target display (non-modal). As the mobile device tracks itself with respect to displays in the environment, the model further allows for full flexibility of the environment (i.e., displays can be re-arranged without affecting the interaction).
And above all, users can interact with external displays, regardless of their actual size, at variable distances without any loss of accuracy.
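    The Shoot & Copy pipeline is only summarised above; as a rough illustration of identifying a display purely from its visual content, the sketch below matches ORB features extracted from a captured camera frame against screenshots of candidate displays using OpenCV and returns the best-matching display. The function and variable names are assumptions for illustration, not the thesis implementation.

        import cv2  # OpenCV, assumed available

        def identify_display(camera_frame, display_screenshots):
            """Return the id of the candidate screenshot that best matches the camera frame.

            camera_frame: grayscale image captured by the phone camera.
            display_screenshots: dict mapping display id -> grayscale screenshot of its content.
            """
            orb = cv2.ORB_create()
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            _, frame_desc = orb.detectAndCompute(camera_frame, None)

            best_id, best_count = None, -1
            for display_id, screenshot in display_screenshots.items():
                _, desc = orb.detectAndCompute(screenshot, None)
                if frame_desc is None or desc is None:
                    continue
                matches = matcher.match(frame_desc, desc)
                if len(matches) > best_count:
                    best_id, best_count = display_id, len(matches)
            return best_id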

    Using natural user interfaces to support synchronous distributed collaborative work

    Synchronous Distributed Collaborative Work (SDCW) occurs when group members work together at the same time from different places to achieve a common goal. Effective SDCW requires good communication, continuous coordination and shared information among group members. SDCW is possible because of groupware, a class of computer software systems that supports group work. Shared-workspace groupware systems provide a common workspace that aims to replicate aspects of a physical workspace shared among group members in a co-located environment. Shared-workspace groupware systems have failed to provide the same degree of coordination and awareness among distributed group members that exists in co-located groups, owing to the unintuitive interaction techniques that these systems have incorporated. Natural User Interfaces (NUIs) focus on reusing natural human abilities such as touch, speech, gestures and proximity awareness to allow intuitive human-computer interaction. These interaction techniques could provide solutions to the existing issues of groupware systems by breaking down the barrier between people and technology created by the interaction techniques currently utilised. The aim of this research was to investigate how NUI interaction techniques could be used to effectively support SDCW. An architecture for such a shared-workspace groupware system was proposed, and a prototype, called GroupAware, was designed and developed based on this architecture. GroupAware allows multiple users in distributed locations to simultaneously view and annotate text documents, and to create graphic designs in a shared workspace. Documents are represented as visual objects that can be manipulated through touch gestures. Group coordination and awareness are maintained through document updates via immediate workspace synchronization, user action tracking via user labels, and user availability identification via basic proxemic interaction. Members can communicate effectively via audio and video conferencing. A user study was conducted to evaluate GroupAware and determine whether NUI interaction techniques effectively supported SDCW. Ten groups of three members each participated in the study. High levels of performance, user satisfaction and collaboration demonstrated that GroupAware was an effective groupware system that was easy to learn and use, and that it effectively supported group work in terms of communication, coordination and information sharing. Participants gave highly positive comments about the system that further supported the results. The successful implementation of GroupAware and the positive results obtained from the user evaluation provide evidence that NUI interaction techniques can effectively support SDCW.
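    GroupAware's synchronization mechanism is not detailed in the abstract; the minimal sketch below illustrates the general idea of immediate workspace synchronization: every change to a shared document is applied to one authoritative state and pushed at once to callbacks registered by each connected member. The class and method names are illustrative assumptions, not GroupAware's API.

        from typing import Callable, Dict, List

        class SharedWorkspace:
            """Authoritative shared state; every update is broadcast to all registered members."""

            def __init__(self) -> None:
                self._annotations: Dict[str, List[str]] = {}             # document id -> annotations
                self._listeners: List[Callable[[str, str], None]] = []   # called on every update

            def join(self, on_update: Callable[[str, str], None]) -> None:
                self._listeners.append(on_update)

            def annotate(self, document_id: str, text: str) -> None:
                self._annotations.setdefault(document_id, []).append(text)
                for notify in self._listeners:  # immediate synchronization to every member
                    notify(document_id, text)

        workspace = SharedWorkspace()
        workspace.join(lambda doc, text: print(f"member A sees {doc}: {text}"))
        workspace.join(lambda doc, text: print(f"member B sees {doc}: {text}"))
        workspace.annotate("design-brief.txt", "Revise section 2")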

    Creating mobile gesture-based interaction design patterns for older adults: a study of tap and swipe gestures with Portuguese seniors

    Master's thesis (Tese de mestrado), Multimédia, Faculdade de Engenharia, Universidade do Porto, 201