20 research outputs found

    Comparing direct and indirect interaction in stroke rehabilitation

    We explore the differences between direct (DI) and indirect (IDI) interaction in stroke rehabilitation. Direct interaction is when patients move their arms in reaction to changes in the augmented physical environment; indirect interaction is when patients move their arms in reaction to changes on a computer screen. We developed a rehabilitation game in both settings and evaluated it in a within-subject study with 10 patients with chronic stroke, aiming to answer two major questions: (i) do the game scores in either of the two interaction modes correlate with clinical assessment scores? and (ii) does performance differ between direct and indirect interaction in patients with stroke? Our experimental results confirm higher performance with DI than with IDI. They also suggest a better correlation between DI and clinical scores. Our study provides evidence for the benefits of direct interaction therapies over indirect computer-assisted therapies in stroke rehabilitation.

    Interaction techniques for older adults using touchscreen devices: a literature review

    Several studies have investigated different interaction techniques and input devices for older adults using touchscreens. This literature review analyses the populations involved, the kinds of tasks executed, the apparatus, the input techniques, the feedback provided, the data collected, and the authors' findings and recommendations. In conclusion, this review shows that age-related changes, previous experience with technologies, characteristics of handheld devices, and use situations need to be studied.

    Gaze+touch vs. touch: what’s the trade-off when using gaze to extend touch to remote displays?

    Direct touch input is employed on many devices, but it is inherently restricted to displays that are reachable by the user. Gaze input as a mediator can extend touch to remote displays - using gaze for remote selection, and touch for local manipulation - but at what cost and benefit? In this paper, we investigate the potential trade-off with four experiments that empirically compare remote Gaze+touch to standard touch. Our experiments investigate dragging, rotation, and scaling tasks. Results indicate that Gaze+touch is, compared to touch, (1) equally fast and more accurate for rotation and scaling, (2) slower and less accurate for dragging, and (3) able to select smaller targets. Our participants confirm this trend and are positive about the relaxed finger placement of Gaze+touch. Our experiments provide detailed performance characteristics to consider for the design of Gaze+touch interaction on remote displays. We further discuss insights into strengths and drawbacks in contrast to direct touch.

    Relative and Absolute Mappings for Rotating Remote 3D Objects on Multi-Touch Tabletops

    The use of human fingers as an object selection and manipulation tool has raised significant challenges when interacting with direct-touch tabletop displays. This is particularly an issue when manipulating remote objects in 3D environments, as finger presses can obscure objects at a distance that are rendered very small. Techniques to support remote manipulation either provide absolute mappings between finger presses and object transformation or rely on tools that support relative mappings to selected objects. This paper explores techniques to manipulate remote 3D objects on direct-touch tabletops using absolute and relative mapping modes. A user study was conducted to compare absolute and relative mappings in support of a rotation task. Overall results did not show a statistically significant difference between these two mapping modes on either task completion time or the number of touches. However, the absolute mapping mode was found to be less efficient than the relative mapping mode when rotating a small object, and participants preferred relative mapping for small objects. Four mapping techniques were then compared for perceived ease of use and learnability. The touchpad, voodoo doll, and telescope techniques were found to be comparable for manipulating remote objects in a 3D scene. A flying camera technique was considered too complex and required increased effort by participants. Participants preferred an absolute mapping technique augmented to support small object manipulation, e.g. the voodoo doll technique.
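    The distinction between the two mapping modes can be sketched as follows for a rotation task; this is an illustrative sketch, and the function names and signatures are hypothetical rather than taken from the paper's implementation:

```python
import math

def absolute_rotation(touch_x, touch_y, center):
    """Absolute mapping: the object's angle is set directly from the
    touch position relative to the object's centre, so the finger's
    absolute location determines the orientation (hypothetical sketch)."""
    return math.atan2(touch_y - center[1], touch_x - center[0])

def relative_rotation(current_angle, prev_touch, cur_touch, center):
    """Relative mapping: only the *change* in touch angle between two
    samples is applied to the object's current angle, which is why a
    small on-screen object can still be rotated precisely."""
    a0 = math.atan2(prev_touch[1] - center[1], prev_touch[0] - center[0])
    a1 = math.atan2(cur_touch[1] - center[1], cur_touch[0] - center[0])
    return current_angle + (a1 - a0)
```

    Under absolute mapping a tiny remote object forces the finger into a tiny control region, whereas relative mapping decouples the gesture's location from the object, which is consistent with the study's finding that relative mapping was preferred for small objects.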

    Multi-touch RST in 2D and 3D Spaces: Studying the Impact of Directness on User Performance

    The RST multi-touch technique allows one to simultaneously control rotations, scaling, and translations with multi-touch gestures. We conducted a user study to better understand the impact of directness on user performance for an RST docking task, for both 2D and 3D visualization conditions. This study showed that direct touch shortens completion times, but indirect interaction improves efficiency and precision, and this is particularly true for 3D visualizations. The study also showed that users' trajectories are comparable for all conditions (2D/3D and direct/indirect). This suggests that indirect RST control may be valuable for interactive visualization of 3D content. To illustrate this finding, we present a demo application that allows novice users to arrange 3D objects on a 2D virtual plane in an easy and efficient way.
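    A two-finger RST gesture of this kind is commonly decomposed into translation of the midpoint, rotation of the inter-finger vector, and scaling of the inter-finger distance. The following is a minimal sketch of that decomposition, assuming two touch points sampled before and after a move; the function name and API are illustrative, not from the paper:

```python
import math

def rst_from_two_touches(p1, p2, q1, q2):
    """Derive rotation, scale, and translation from two touch points
    moving from (p1, p2) to (q1, q2). Hypothetical sketch of the
    standard two-finger RST decomposition."""
    # Vectors between the two fingers before and after the move.
    vx0, vy0 = p2[0] - p1[0], p2[1] - p1[1]
    vx1, vy1 = q2[0] - q1[0], q2[1] - q1[1]
    # Rotation: change in the angle of the inter-finger vector.
    rotation = math.atan2(vy1, vx1) - math.atan2(vy0, vx0)
    # Scale: ratio of inter-finger distances.
    scale = math.hypot(vx1, vy1) / math.hypot(vx0, vy0)
    # Translation: displacement of the midpoint between the fingers.
    tx = (q1[0] + q2[0]) / 2 - (p1[0] + p2[0]) / 2
    ty = (q1[1] + q2[1]) / 2 - (p1[1] + p2[1]) / 2
    return rotation, scale, (tx, ty)
```

    In a direct condition the computed transform is applied to the object under the fingers; in an indirect condition the same transform is relayed to a remote or separately displayed object, which is what makes the directness comparison possible with one shared control scheme.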

    Visualization of Tree-Structured Data Through a Multi Touch User Interface

    This writing project examines different types of visualizations for tree-structured data sets, including link-node diagrams and treemap diagrams. Also discussed are recent innovations in distinguishing multi-touch from single-touch technology. I explore the requirements for building a multi-touch tabletop surface and describe the process of building one. I then describe my proposed method of visualizing tree-structured data and how it can be implemented using Core Animation technology. I also propose a means of interacting with the data through a multi-touch interface, and discuss which gestures can be used to navigate the visualization display.

    Multi-Touch

    The main contribution of this project is, first, optimizing a multi-touch simulation to demonstrate multi-input capabilities, and then proceeding to a hardware implementation in the second stage of the project, Final Year Project (FYP) II.

    BiTouch and BiPad: Designing Bimanual Interaction for Hand-held Tablets

    Despite the demonstrated benefits of bimanual interaction, most tablets use just one hand for interaction, freeing the other for support. In a preliminary study, we identified five holds that permit simultaneous support and interaction, and noted that users frequently change position to combat fatigue. We then designed the BiTouch design space, which introduces a support function in the kinematic chain model for interacting with hand-held tablets, and developed BiPad, a toolkit for creating bimanual tablet interaction with the thumb or the fingers of the supporting hand. We ran a controlled experiment to explore how tablet orientation and hand position affect three novel techniques: bimanual taps, gestures, and chords. Bimanual taps outperformed our one-handed control condition in both landscape and portrait orientations; bimanual chords and gestures did so in portrait mode only; and thumbs outperformed fingers, but were more tiring and less stable. Together, BiTouch and BiPad offer new opportunities for designing bimanual interaction on hand-held tablets.

    Evaluation of a Multitouch-Based Menu for Magic Lenses in Comparison to Classical Menus

    The complex analysis of huge data sets is an increasing challenge in information visualization. With the help of Magic Lenses, the sometimes confusing visualization of such data sets can be locally manipulated and simplified. The number of filter functions and how they can be altered are particularly important. The advantages of multitouch displays over conventional mouse and keyboard interaction are exploited here; however, problems continually arise when adapting existing menu designs. Since Magic Lenses are to be parameterized via menus, the question arises which kinds of menu and interaction are better suited to this. In this work we evaluate a multitouch-based context menu for Magic Lenses. It is located directly at the lens border and is compact but new to users. We test its competitiveness against a classical, global menu designed and implemented specifically for this study; users are more familiar with this menu, but it is separated from the lens. Finally, the results are summarized and analyzed from quantitative and qualitative perspectives. It turns out that classical menus with touch interaction performed better, but the multitouch-based context menu was more popular with participants and was preferred for working with lenses.