
    A multi-touch interface for multi-robot path planning and control

    In the last few years, research in human-robot interaction has moved beyond the issues concerning the design of the interaction between a person and a single robot. Today, many researchers have shifted their focus toward the problem of how humans can control a multi-robot team. The rise of multi-touch devices provides a new range of opportunities in this sense. Our research seeks to discover new insights and guidelines for the design of multi-touch interfaces for the control of biologically inspired multi-robot teams. We have developed an iPad touch interface that lets users exert partial control over a set of autonomous robots. The interface also serves as an experimental platform to study how human operators design multi-robot motion in a pursuit-evasion setting.
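
    The abstract describes an interface that turns user touch input into partial control of autonomous robots but does not detail the mechanism, so the sketch below is only one plausible illustration of such a pipeline: a touch-drawn stroke is resampled into waypoints and the same path is handed to each selected robot, whose own controllers remain in charge of following it. The function names, the spacing parameter, and the dispatch strategy are assumptions for illustration, not the paper's implementation.

from math import hypot

def resample_stroke(points, spacing):
    """Keep the first touch sample, then keep a sample whenever the path length
    travelled since the last kept sample exceeds `spacing` (screen units)."""
    if not points:
        return []
    waypoints = [points[0]]
    travelled = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        travelled += hypot(x1 - x0, y1 - y0)
        if travelled >= spacing:
            waypoints.append((x1, y1))
            travelled = 0.0
    return waypoints

def dispatch_path(waypoints, selected_robots, send):
    """Hand the same waypoint list to every selected robot; each robot's own
    controller decides how to follow it (partial, not direct, control)."""
    for robot_id in selected_robots:
        send(robot_id, waypoints)

if __name__ == "__main__":
    stroke = [(0, 0), (12, 0), (25, 6), (40, 6), (60, 22)]
    path = resample_stroke(stroke, spacing=15)
    dispatch_path(path, ["robot_1", "robot_2"], send=lambda rid, wps: print(rid, wps))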

    GART: The Gesture and Activity Recognition Toolkit

    Presented at the 12th International Conference on Human-Computer Interaction, Beijing, China, July 2007. The original publication is available at www.springerlink.com. The Gesture and Activity Recognition Toolkit (GART) is a user interface toolkit designed to enable the development of gesture-based applications. GART provides an abstraction over machine learning algorithms suitable for modeling and recognizing different types of gestures. The toolkit also provides support for data collection and the training process. In this paper, we present GART and its machine learning abstractions. Furthermore, we detail the components of the toolkit and present two example gesture recognition applications.
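
    GART's actual API is not reproduced in the abstract, so the sketch below only illustrates the general shape of the abstraction it describes: labelled gesture examples are collected, a model is trained, and new gestures are classified. The feature set and the nearest-centroid classifier are deliberately simplistic placeholders standing in for GART's machine learning back end, and all names here are hypothetical.

from collections import defaultdict
from math import dist

def features(samples):
    """Toy features for a gesture: mean x/y and total path length.
    `samples` is a list of (x, y) sensor readings."""
    n = len(samples)
    mean_x = sum(x for x, _ in samples) / n
    mean_y = sum(y for _, y in samples) / n
    length = sum(dist(a, b) for a, b in zip(samples, samples[1:]))
    return (mean_x, mean_y, length)

class GestureRecognizer:
    def __init__(self):
        self._examples = defaultdict(list)   # label -> list of feature vectors
        self._centroids = {}                 # label -> mean feature vector

    def add_example(self, label, samples):
        """Data-collection step: store a labelled training gesture."""
        self._examples[label].append(features(samples))

    def train(self):
        """Training step: compute one centroid per gesture label."""
        for label, vecs in self._examples.items():
            self._centroids[label] = tuple(sum(c) / len(vecs) for c in zip(*vecs))

    def recognize(self, samples):
        """Recognition step: return the label of the nearest centroid in feature space."""
        f = features(samples)
        return min(self._centroids, key=lambda lbl: dist(f, self._centroids[lbl]))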

    A new method for interacting with multi-window applications on large, high resolution displays

    Physically large display walls can now be constructed using off-the-shelf computer hardware. The high resolution of these displays (e.g., 50 million pixels) means that a large quantity of data can be presented to users, so the displays are well suited to visualization applications. However, current methods of interacting with display walls are somewhat time-consuming. We have analyzed how users solve real visualization problems using three desktop applications (XmdvTool, Iris Explorer and ArcView), and used a new taxonomy to classify users’ actions and illustrate the deficiencies of current display wall interaction methods. Following this, we designed a novel method for interacting with display walls, which aims to let users interact as quickly as when a visualization application is used on a desktop system. Informal feedback gathered from our working prototype shows that interaction is both fast and fluid.

    Augmenting the Spatial Perception Capabilities of Users Who Are Blind

    People who are blind face a series of challenges and limitations resulting from their inability to see, forcing them to either seek the assistance of a sighted individual or work around the challenge by way of an inefficient adaptation (e.g., following the walls in a room in order to reach a door rather than walking in a straight line to the door). These challenges are directly related to blind users' lack of the spatial perception capabilities normally provided by the human vision system. In order to overcome these spatial perception related challenges, modern technologies can be used to convey spatial perception data through sensory substitution interfaces. This work is the culmination of several projects which address varying spatial perception problems for blind users. First, we consider the development of non-visual natural user interfaces for interacting with large displays. This work explores the haptic interaction space in order to find useful and efficient haptic encodings for the spatial layout of items on large displays. Multiple interaction techniques are presented which build on prior research (Folmer et al. 2012), and the efficiency and usability of the most efficient of these encodings is evaluated with blind children. Next, we evaluate the use of wearable technology in aiding navigation of blind individuals through large open spaces that lack the tactile landmarks used during traditional white cane navigation. We explore the design of a computer vision application with an unobtrusive aural interface to minimize veering of the user while crossing a large open space. Together, these projects represent an exploration into the use of modern technology in augmenting the spatial perception capabilities of blind users.
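
    As a rough illustration of the aural veering-correction idea mentioned above, the sketch below compares the user's current heading with the bearing of the far side of the open space and returns a left or right audio cue when the deviation exceeds a tolerance. The cue names, the 10-degree tolerance, and the assumption that heading and bearing are already available from the vision system are illustrative, not the project's actual design.

def heading_error(heading_deg, target_bearing_deg):
    """Signed error in degrees, wrapped to the range [-180, 180)."""
    return (target_bearing_deg - heading_deg + 180) % 360 - 180

def veering_cue(heading_deg, target_bearing_deg, tolerance_deg=10):
    """Return None when roughly on course, else which way the user should correct."""
    err = heading_error(heading_deg, target_bearing_deg)
    if abs(err) <= tolerance_deg:
        return None
    return "beep_right" if err > 0 else "beep_left"

if __name__ == "__main__":
    # User has drifted left of the crossing line, so the cue steers them right.
    print(veering_cue(heading_deg=85, target_bearing_deg=100))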

    E-Pad: Large Display Pointing in a Continuous Interaction Space around a Mobile Device

    Indirect relative pointing on a large display using a tactile mobile device (such as a tablet or phone) is a viable interaction technique (which we call Pad in this paper) that permits accurate pointing. However, the limited device size has consequences for interaction: such systems are known to often require clutching, which degrades performance. We present E-Pad, an indirect relative pointing interaction technique that takes advantage of the mobile tactile surface combined with its surrounding space. A user can perform continuous relative pointing starting on the pad and then continuing in the free space around the pad, within arm's reach. As a first step toward E-Pad, we introduce extended continuous relative pointing gestures and conduct a preliminary study to determine how people move their hand around the mobile device. We then conduct an experiment that compares the performance of E-Pad and Pad. Our findings indicate that E-Pad is faster than Pad and decreases the number of clutches without compromising accuracy. Our findings also suggest an overwhelming preference for E-Pad.
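
    A minimal sketch of the core idea, continuing a relative-pointing gesture past the pad's edge, is given below. It assumes the system receives touch deltas while the finger is on the pad and hand-tracking deltas once the hand moves into the surrounding space; the class, the gain values, and the input sources are illustrative assumptions, not E-Pad's implementation.

class ContinuousPointer:
    def __init__(self, pad_gain=2.0, air_gain=1.5):
        self.pad_gain = pad_gain   # cursor pixels per pad pixel of finger motion
        self.air_gain = air_gain   # cursor pixels per unit of tracked hand motion
        self.x = 0.0
        self.y = 0.0

    def on_pad_move(self, dx, dy):
        """Relative pointing while the finger is on the tactile surface."""
        self.x += dx * self.pad_gain
        self.y += dy * self.pad_gain

    def on_air_move(self, dx, dy):
        """The same gesture continued in the space around the device, within arm's reach."""
        self.x += dx * self.air_gain
        self.y += dy * self.air_gain

if __name__ == "__main__":
    pointer = ContinuousPointer()
    pointer.on_pad_move(30, 10)   # finger drags across the pad
    pointer.on_air_move(80, 0)    # hand keeps moving past the pad's edge, without clutching
    print(pointer.x, pointer.y)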

    Which One is Me?: Identifying Oneself on Public Displays

    While user representations are extensively used on public displays, it remains unclear how well users can recognize their own representation among those of surrounding users. We study the most widely used representations: abstract objects, skeletons, silhouettes and mirrors. In a prestudy (N=12), we identify five strategies that users follow to recognize themselves on public displays. In a second study (N=19), we quantify the users' recognition time and accuracy with respect to each representation type. Our findings suggest that there is a significant effect of (1) the representation type, (2) the strategies performed by users, and (3) the combination of both on recognition time and accuracy. We discuss the suitability of each representation for different settings and provide specific recommendations as to how user representations should be applied in multi-user scenarios. These recommendations guide practitioners and researchers in selecting the representation that best fits the deployment's requirements and the user strategies that are feasible in that environment.

    A three-step interaction pattern for improving discoverability in finger identification techniques

    Published in UIST'14 Adjunct: Adjunct Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology. Identifying which fingers are in contact with a multi-touch surface provides a very large input space that can be leveraged for command selection. However, the numerous possibilities enabled by such a vast space come at the cost of discoverability. To alleviate this problem, we introduce a three-step interaction pattern inspired by hotkeys that also supports feedforward. We illustrate this interaction with three applications that allow us to explore and adapt it in different contexts.
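
    The abstract does not spell out the three steps, so the sketch below is one hypothetical reading of a hotkey-like pattern with feedforward: hold to request guidance, preview what each identified finger would trigger, then commit with that finger. The step breakdown and the finger-to-command mapping are assumptions for illustration only, not the technique described in the paper.

FINGER_COMMANDS = {"index": "copy", "middle": "paste", "ring": "undo", "little": "redo"}

class ThreeStepSelector:
    def __init__(self, commands=FINGER_COMMANDS):
        self.commands = commands
        self.feedforward_visible = False

    def step1_hold(self):
        """Step 1: a press-and-hold requests guidance instead of firing a command."""
        self.feedforward_visible = True
        return self.commands  # what the UI would render next to each finger

    def step2_preview(self, finger):
        """Step 2: feedforward shows what this finger would trigger, without executing it."""
        return self.commands.get(finger)

    def step3_commit(self, finger):
        """Step 3: touching with the identified finger executes the command."""
        self.feedforward_visible = False
        return self.commands.get(finger)

if __name__ == "__main__":
    selector = ThreeStepSelector()
    print(selector.step1_hold())             # reveal the finger-to-command mapping
    print(selector.step2_preview("middle"))  # preview only
    print(selector.step3_commit("middle"))   # execute "paste"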

    Cross-display attention switching in mobile interaction with large displays

    Mobile devices equipped with features such as a camera, network connectivity and a media player are increasingly being used for different tasks such as web browsing, document reading and photography. While the portability of mobile devices makes them desirable for pervasive access to information, their small screen real-estate often imposes restrictions on the amount of information that can be displayed and manipulated on them. On the other hand, large displays have become commonplace in many outdoor as well as indoor environments. While they provide an efficient way of presenting and disseminating information, they provide little support for digital interactivity or physical accessibility. Researchers argue that mobile phones provide an efficient and portable way of interacting with large displays, and the latter can overcome the limitations of the small screens of mobile devices by providing a larger presentation and interaction space. However, distributing user interface (UI) elements across a mobile device and a large display can cause switching of visual attention, which may affect task performance. This thesis specifically explores how the switching of visual attention across a handheld mobile device and a vertical large display can affect a single user's task performance during mobile interaction with large displays. It introduces a taxonomy based on the factors associated with the visual arrangement of Multi Display User Interfaces (MDUIs) that can influence visual attention switching during interaction with MDUIs. It presents an empirical analysis of the effects of different distributions of input and output across mobile and large displays on the user's task performance, subjective workload and preference in a multiple-widget selection task, and in visual search tasks with maps, texts and photos. Experimental results show that selecting multiple widgets replicated on the mobile device as well as on the large display is faster than selecting widgets shown only on the large display, despite the cost of initial attention switching in the former. On the other hand, a hybrid UI configuration where the visual output is distributed across the mobile and large displays is the worst, or equivalent to the worst, configuration in all the visual search tasks. A mobile device-controlled large display configuration performs best in the map search task and equal to best (i.e., tied with a mobile-only configuration) in the text- and photo-search tasks.

    Comparing direct and indirect interaction in stroke rehabilitation

    We explore the differences between direct (DI) and indirect (IDI) interaction in stroke rehabilitation. Direct interaction is when patients move their arms in reaction to changes in the augmented physical environment; indirect interaction is when patients move their arms in reaction to changes on a computer screen. We developed a rehabilitation game in both settings, evaluated in a within-subject study with 10 patients with chronic stroke, aiming to answer two major questions: (i) do the game scores in either of the two interaction modes correlate with clinical assessment scores? and (ii) is performance different using direct versus indirect interaction in patients with stroke? Our experimental results confirm higher performance with DI than with IDI. They also suggest a stronger correlation between DI game scores and clinical scores. Our study provides evidence for the benefits of direct interaction therapies over indirect computer-assisted therapies in stroke rehabilitation.