8 research outputs found

A longitudinal review of Mobile HCI research methods

    This paper revisits a research methods survey from 2003 and contrasts it with a survey from 2010. The motivation is to gain insight into how mobile HCI research has evolved over the last decade in terms of approaches and focus. The paper classifies 144 publications from 2009 published in 10 prominent outlets by their research methods and purpose. Comparing this to the survey for 2000-02 shows that mobile HCI research has changed methodologically. From being almost exclusively driven by engineering and applied research, current mobile HCI is primarily empirically driven, involves a high number of field studies, and focuses on evaluating and understanding, as well as engineering. It has also become increasingly multi-methodological, combining and diversifying methods from different disciplines. At the same time, new opportunities and challenges have emerged.

    How to Interact with Augmented Reality Head Mounted Devices in Care Work? A Study Comparing Handheld Touch (Hands-on) and Gesture (Hands-free) Interaction

    In this paper, we investigate augmented reality (AR) to support caregivers. We implemented a system called Care Lenses that supported various care tasks on AR head-mounted devices. For its application, one question concerned how caregivers could interact with the system while providing care (i.e., while using one or both hands for care tasks). We therefore compared two mechanisms for interacting with the Care Lenses: handheld touch (similar to touchpads and touchscreens) and head gestures. We found that head gestures were difficult to apply in practice; that aside, the head-gesture support was as usable and useful as handheld touch interaction, even though the study participants were much more familiar with handheld touch control. We conclude that head gestures can be a good means of enabling AR support in care, and we provide design considerations to make them more applicable in practice.
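
    The abstract does not say how the head gestures were recognised, so the following is only a minimal sketch of one plausible approach: thresholding angular velocity from the headset's yaw readings to detect a deliberate head turn. All names, thresholds and sampling values are assumptions for illustration and are not taken from the Care Lenses system.

```python
# Illustrative sketch only: detecting a deliberate head turn (e.g., to confirm
# a selection) from a stream of yaw readings. Thresholds and structure are
# hypothetical and not taken from the Care Lenses system.

def detect_head_turn(yaw_samples, sample_rate_hz=60,
                     velocity_threshold_deg_s=60.0, min_duration_s=0.15):
    """Return 'left', 'right', or None for one window of yaw angles (degrees)."""
    min_samples = int(min_duration_s * sample_rate_hz)
    run_direction, run_length = None, 0
    for prev, curr in zip(yaw_samples, yaw_samples[1:]):
        velocity = (curr - prev) * sample_rate_hz  # angular velocity in deg/s
        direction = 'right' if velocity > velocity_threshold_deg_s else \
                    'left' if velocity < -velocity_threshold_deg_s else None
        if direction and direction == run_direction:
            run_length += 1
        else:
            run_direction, run_length = direction, 1 if direction else 0
        if run_direction and run_length >= min_samples:
            return run_direction
    return None

# Example: a quick turn to the right over roughly a quarter second at 60 Hz.
samples = [i * 1.5 for i in range(15)] + [21.0] * 10
print(detect_head_turn(samples))  # -> 'right'
```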

    Multimodal Flexibility in a Mobile Text Entry Task (Multimodaalinen joustavuus mobiilissa tekstinsyöttötehtävässä)

    The mobile usability of an interface depends on the amount of information a user is able to retrieve or transmit while on the move. Furthermore, the information transmission capacity and successful transmission depend on how flexibly usable the interface is across varying real-world contexts. Research on multimodal flexibility has focused mainly on the facilitation of modalities in the interface. Most evaluative studies have measured the effects that concurrent interactions cause to each other. However, assessing these effects under a limited number of conditions does not generalize to other possible conditions in the real world. Moreover, studies have often compared single-task conditions to dual-tasking, measuring the trade-off between the tasks rather than the actual effects the interactions cause. To contribute to the paradigm of measuring multimodal flexibility, this thesis isolates the effect of modality utilization in the interaction with the interface: instead of using a secondary task, modalities are withdrawn from the interaction.
    The multimodal flexibility method [1] was applied in this study to assess the utilization of three sensory modalities (vision, audition and tactition) in a text input task with three mobile interfaces: a 12-key keypad (ITU-12), a physical Qwerty keyboard and a touch-screen virtual Qwerty keyboard. The goal of the study was to compare the multimodal flexibility of these interfaces, assess the value of each utilized sensory modality to the interaction, and examine the cooperation of modalities in a text input task. The results imply that the alphabetical 12-key keypad is multimodally the most flexible of the three compared interfaces. Although the 12-key keypad is relatively inefficient for typing when all modalities are free to be allocated to the interaction, it is the most flexible in performing under the constraints that the real world might set on sensory modalities. In addition, all the interfaces were shown to be highly dependent on vision: the performance of both Qwerty keyboards dropped by approximately 80% when vision was withdrawn from the interaction, and the performance of the ITU-12 suffered by approximately 50%. Examining the cooperation of the modalities in the text input task, vision was shown to work in synergy with tactition, but audition did not provide any extra value to the interaction.
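
    The abstract does not give the exact formulation of the multimodal flexibility metric from [1]. As a hedged illustration only, one way to express a modality's value is the fraction of baseline text-entry performance retained when that modality is withdrawn, summarising flexibility as the mean retention across withdrawal conditions; the function names and numbers below are invented for demonstration.

```python
# Rough illustration (not the exact formulation of the method cited as [1]):
# express how much typing performance each interface retains when a sensory
# modality is withdrawn, and summarise "flexibility" as the mean retention.
# The numbers below are made up for demonstration only.

def retention(baseline_wpm, constrained_wpm):
    """Fraction of baseline text-entry rate kept under a modality constraint."""
    return constrained_wpm / baseline_wpm

def flexibility(baseline_wpm, constrained_wpm_by_condition):
    """Mean retention across all modality-withdrawal conditions."""
    values = [retention(baseline_wpm, wpm)
              for wpm in constrained_wpm_by_condition.values()]
    return sum(values) / len(values)

# Hypothetical example: ITU-12 keypad vs. physical Qwerty (words per minute).
itu12 = {'no_vision': 7.0, 'no_audio': 13.5, 'no_tactile': 12.0}
qwerty = {'no_vision': 4.0, 'no_audio': 21.0, 'no_tactile': 15.0}
print(round(flexibility(14.0, itu12), 2))   # slower baseline, degrades less
print(round(flexibility(22.0, qwerty), 2))  # faster baseline, degrades more
```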

    Assisting Navigation and Object Selection with Vibrotactile Cues

    Our lives have been drastically altered by information technology in recent decades, leading to evolutionary mismatches between human traits and the modern environment. One particular mismatch occurs when visually demanding information technology overloads the perceptual, cognitive or motor capabilities of the human nervous system. This information overload could be partly alleviated by complementing visual interaction with haptics. The primary aim of this thesis was to investigate how to assist movement control with vibrotactile cues. Vibrotactile cues refer to technology-mediated vibrotactile signals that notify users of perceptual events, propose that users make decisions, and give users feedback on their actions. To explore vibrotactile cues, we carried out five experiments in two contexts of movement control: navigation and object selection. The goal was to find ways to reduce information load in these tasks, thus helping users to accomplish the tasks more effectively. We employed measurements such as reaction times, error rates, and task completion times. We also used subjective rating scales, short interviews, and free-form participant comments to assess the vibrotactile-assisted interactive systems. The findings of this thesis can be summarized as follows. First, if the context of movement control allows the use of both feedback and feedforward cues, feedback cues are a reasonable first option. Second, when using vibrotactile feedforward cues, using low-level abstractions and supporting the interaction with other modalities can keep the information load as low as possible. Third, the temple area is a feasible actuation location for vibrotactile cues in movement control, including navigation cues and object selection cues with head turns. However, the usability of the area depends on contextual factors such as spatial congruency, the actuation device, and the pace of the interaction task.
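
    As an illustration of the kind of navigation cue described above (vibrotactile actuation at the temples), the sketch below maps the bearing to a navigation target onto left/right temple vibration amplitudes. The mapping function and its parameters are assumptions, not the thesis's implementation.

```python
# Illustrative sketch only: mapping the bearing to a navigation target onto
# vibration amplitudes for two actuators at the temples. The actuator placement
# follows the general idea in the abstract, but the specific mapping is an
# assumption, not the system built in the thesis.

def temple_cue(target_bearing_deg, heading_deg, max_amplitude=1.0):
    """Return (left_amplitude, right_amplitude), each in [0, max_amplitude].

    The actuator on the side the user should turn toward vibrates more
    strongly; walking straight toward the target gives no cue."""
    # Signed angular error in (-180, 180]: positive means target is to the right.
    error = (target_bearing_deg - heading_deg + 180.0) % 360.0 - 180.0
    strength = max_amplitude * min(abs(error) / 90.0, 1.0)
    if error > 0:
        return (0.0, strength)   # cue a right turn
    elif error < 0:
        return (strength, 0.0)   # cue a left turn
    return (0.0, 0.0)            # on course

print(temple_cue(target_bearing_deg=45.0, heading_deg=0.0))    # right cue
print(temple_cue(target_bearing_deg=350.0, heading_deg=10.0))  # left cue
```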

    The effects of encumbrance and mobility on interactions with touchscreen mobile devices

    Mobile handheld devices such as smartphones are now convenient as they allow users to make calls, reply to emails, find nearby services and much more. The increase in functionality and availability of mobile applications also allows mobile devices to be used in many different everyday situations (for example, while on the move and carrying items). While previous work has investigated the interaction difficulties in walking situations, there is a lack of empirical work in the literature on mobile input when users are physically constrained by other activities. As a result, how users input on touchscreen handheld devices in encumbered and mobile contexts is less well known and deserves more attention to examine the usability issues that are often ignored. This thesis investigates targeting performance on touchscreen mobile phones in one common encumbered situation: when users are carrying everyday objects while on the move. To identify the typical objects held during mobile interactions and define a set of common encumbrance scenarios to evaluate in subsequent user studies, Chapter 3 describes an observational study that examined users in different public locations. The results showed that people most frequently carried different types of bags and boxes. To measure how much tapping performance on touchscreen mobile phones is affected, Chapter 4 examines a range of encumbrance scenarios, which includes holding a bag in-hand or a box underarm, on either the dominant or non-dominant side, during target selections on a mobile phone. Users are likely to switch to a more effective input posture when encumbered and on the move, so Chapter 5 investigates one- and two-handed encumbered interactions and evaluates situations where both hands are occupied with multiple objects. Touchscreen devices afford various multi-touch input types, so Chapter 6 compares the performance of four main one- and two-finger gesture inputs: tapping, dragging, spreading & pinching and rotating, while walking and encumbered. Several main evaluation approaches have been used in previous walking studies, but more attention is required when the effects of encumbrance are also being examined. Chapter 7 examines the appropriateness of two methods (ground and treadmill walking) for encumbered and walking studies, justifies the need to control walking speed, and examines the effects of varying walking speed (i.e. walking slower or faster than normal) on encumbered targeting performance. The studies all showed a reduction in targeting performance when users were walking and encumbered, so Chapter 8 explores two ways to improve target selections. The first approach defines a target size, based on the results collected from earlier studies, to increase tapping accuracy; subsequently, a novel interface arrangement was designed which optimises screen space more effectively. The second approach evaluates a benchmark pointing technique, which has been shown to improve the selection of small targets, to see if it is useful in walking and encumbered contexts.
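
    Chapter 8's first approach derives a target size from the tapping data collected in the earlier studies. The sketch below shows one simple way such a size could be derived from touch offsets (a 95%-coverage rule over made-up data); it is purely an illustration, not the thesis's actual procedure or values.

```python
# Hedged sketch: deriving a minimum target size from observed touch offsets,
# in the spirit of the first approach in Chapter 8. The coverage rule and the
# sample data are assumptions for illustration only.

def target_size_mm(touch_offsets_mm, coverage=0.95):
    """Smallest square target (side length, mm) covering `coverage` of taps.

    `touch_offsets_mm` are absolute distances from each tap to the intended
    target centre, pooled from walking-and-encumbered trials."""
    ordered = sorted(touch_offsets_mm)
    index = min(int(coverage * len(ordered)), len(ordered) - 1)
    radius = ordered[index]
    return 2.0 * radius  # the target must extend `radius` either side of centre

# Made-up offsets (mm) from a hypothetical encumbered walking condition.
offsets = [1.2, 1.8, 2.1, 2.4, 2.6, 3.0, 3.3, 3.9, 4.4, 5.8]
print(target_size_mm(offsets))  # -> 11.6
```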

    Using pressure input and thermal feedback to broaden haptic interaction with mobile devices

    Pressure input and thermal feedback are two under-researched aspects of touch in mobile human-computer interfaces. Pressure input could provide a wide, expressive range of continuous input for mobile devices. Thermal stimulation could provide an alternative means of conveying information non-visually. This thesis research investigated 1) how accurate pressure-based input on mobile devices could be when the user was walking and provided with only audio feedback, and 2) what forms of thermal stimulation are both salient and comfortable and so could be used to design structured thermal feedback for conveying multi-dimensional information. The first experiment tested control of pressure on a mobile device when sitting and using audio feedback. Targeting accuracy was >= 85% when maintaining 4-6 levels of pressure across 3.5 Newtons, using only audio feedback and a Dwell selection technique. Two further experiments tested control of pressure-based input when walking and found accuracy was very high (>= 97%) even when walking and using only audio feedback, when using a rate-based input method. A fourth experiment tested how well each digit of one hand could apply pressure to a mobile phone individually and in combination with others. Each digit could apply pressure highly accurately, but not equally so, while some performed better in combination than alone. 2- or 3-digit combinations were more precise than 4- or 5-digit combinations. Experiment 5 compared one-handed, multi-digit pressure input using all 5 digits to traditional two-handed multitouch gestures for a combined zooming and rotating map task. Results showed comparable performance, with multitouch being ~1% more accurate but pressure input being ~0.5 s faster, overall. Two experiments, one when sitting indoors and one when walking indoors, tested how salient and subjectively comfortable/intense various forms of thermal stimulation were. Faster or larger changes were more salient, faster to detect and less comfortable, and cold changes were more salient and faster to detect than warm changes. The two final studies designed two-dimensional structured ‘thermal icons’ that could convey two pieces of information. When indoors, icons were correctly identified with 83% accuracy. When outdoors, accuracy dropped to 69% when sitting and 61% when walking. This thesis provides the first detailed study of how precisely pressure can be applied to mobile devices when walking and provided with audio feedback, and the first systematic study of how to design thermal feedback for interaction with mobile devices in mobile environments.
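
    To make the pressure-input set-up concrete, the sketch below quantises a continuous force signal into discrete levels over a 3.5 N range and confirms a selection after a dwell period, broadly matching the description above. The level boundaries, dwell length and sample trace are assumptions for illustration, not the thesis's parameters.

```python
# Illustrative sketch only: quantising a pressure signal (Newtons) into
# discrete levels and confirming a selection with a dwell time, in the spirit
# of the 4-6 level, 3.5 N, Dwell-selection set-up described above.

def pressure_to_level(force_n, n_levels=5, max_force_n=3.5):
    """Map a force reading to a level in 0..n_levels-1."""
    clamped = max(0.0, min(force_n, max_force_n))
    return min(int(clamped / max_force_n * n_levels), n_levels - 1)

def dwell_select(force_samples_n, sample_rate_hz=50, dwell_s=1.0):
    """Return the first level held steadily for `dwell_s` seconds, or None."""
    needed = int(dwell_s * sample_rate_hz)
    held_level, held_count = None, 0
    for force in force_samples_n:
        level = pressure_to_level(force)
        held_level, held_count = (level, held_count + 1) if level == held_level \
                                 else (level, 1)
        if held_count >= needed:
            return held_level
    return None

# Example: the user settles on ~2 N (level 2) for just over a second at 50 Hz.
trace = [0.5, 1.0, 1.6] + [2.0] * 55
print(dwell_select(trace))  # -> 2
```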

    Enhanced sensor-based interaction techniques for mobile map-based applications

    Mobile phones are increasingly being equipped with a wide range of sensors which enable a variety of interaction techniques. Sensor-based interaction techniques are particularly promising for domains such as map-based applications, where the user is required to interact with a large information space on the small screen of a mobile phone. Traditional interaction techniques have several shortcomings for interacting with mobile map-based applications. Keypad interaction offers limited control over panning speed and direction. Touch-screen interaction is often a two-handed form of interaction and results in the display being occluded during interaction. Sensor-based interaction provides the potential to address many of these shortcomings, but currently suffers from several limitations. The aim of this research was to propose enhancements to address the shortcomings of sensor-based interaction, with a particular focus on tilt interaction. A comparative study between tilt and keypad interaction was conducted using a prototype mobile map-based application. This user study was conducted in order to identify shortcomings and opportunities for improving tilt interaction techniques in this domain. Several shortcomings, including controllability, mental demand and practicality concerns, were highlighted. Several enhanced tilt interaction techniques were proposed to address these shortcomings. These techniques were the use of visual and vibrotactile feedback, attractors, gesture zooming, sensitivity adaptation and dwell-time selection. The results of a comparative user study showed that the proposed techniques achieved several improvements in terms of the problem areas identified earlier. The use of sensor fusion for tilt interaction was compared to an accelerometer-only approach which has been widely applied in existing research. This evaluation was motivated by advances in mobile sensor technology which have led to the widespread adoption of digital compass and gyroscope sensors. The results of a comparative user study between sensor fusion and accelerometer-only implementations of tilt interaction showed several advantages for the use of sensor fusion, particularly in a walking context of use. Modifications to sensitivity adaptation and the use of tilt to perform zooming were also investigated. These modifications were designed to address controllability shortcomings identified in earlier experimental work. The results of a comparison between tilt zooming and gesture zooming indicated that tilt zooming offered better results, both in terms of performance and subjective user ratings. Modifications to the original sensitivity adaptation algorithm were only partly successful. Greater accuracy improvements were achieved for walking tasks, but the use of dynamic dampening factors was found to be confusing. The results of this research were used to propose a framework for mobile tilt interaction. This framework provides an overview of the tilt interaction process and highlights how the enhanced techniques proposed in this research can be integrated into the design of tilt interaction techniques. The framework also proposes an application architecture which was implemented as an Application Programming Interface (API). This API was successfully used in the development of two prototype mobile applications incorporating tilt interaction.
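
    As background to the sensor-fusion comparison, the sketch below shows a generic complementary-filter tilt estimate (gyroscope rate fused with accelerometer tilt) mapped to a map-panning velocity with a dead zone for controllability. This is a textbook illustration of the idea, not the API or algorithm developed in the thesis; all parameter values are assumptions.

```python
# Hedged sketch of sensor-fusion tilt estimation: a complementary filter that
# combines gyroscope rate (smooth, but drifts) with accelerometer tilt (noisy,
# but drift-free), then maps the fused angle to a panning velocity.

import math

def accel_pitch_deg(ax, ay, az):
    """Pitch angle estimated from the gravity direction (accelerometer, m/s^2)."""
    return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

def fuse_pitch(prev_pitch_deg, gyro_rate_deg_s, ax, ay, az, dt_s, alpha=0.98):
    """Complementary filter: mostly integrated gyro, corrected by accelerometer."""
    gyro_estimate = prev_pitch_deg + gyro_rate_deg_s * dt_s
    return alpha * gyro_estimate + (1.0 - alpha) * accel_pitch_deg(ax, ay, az)

def pan_velocity(pitch_deg, dead_zone_deg=3.0, gain_px_per_deg=12.0):
    """Map fused tilt to map-panning speed, with a dead zone for controllability."""
    if abs(pitch_deg) < dead_zone_deg:
        return 0.0
    return gain_px_per_deg * (pitch_deg - math.copysign(dead_zone_deg, pitch_deg))

# One simulated update: device tilted forward ~10 degrees, gyro reads 5 deg/s.
pitch = fuse_pitch(prev_pitch_deg=9.5, gyro_rate_deg_s=5.0,
                   ax=-1.7, ay=0.0, az=9.65, dt_s=0.02)
print(round(pitch, 2), round(pan_velocity(pitch), 1))
```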