9 research outputs found

    Kin'touch: understanding how visually impaired people explore tactile maps

    Tactile and interactive maps are widely used as an orientation aid for visually impaired people. Yet little is known about haptic exploration strategies and their influence on the resulting cognitive map. We have designed a prototype that can automatically analyse different users' exploration strategies. The prototype integrates data from the MS Kinect camera and a multi-touch table and registers the location of hands and fingers on a tactile map. Results of preliminary studies show that this approach is promising.
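
    The abstract does not describe how the Kinect and multi-touch data streams are combined; the sketch below shows one plausible fusion step, assuming hand positions have already been projected into table coordinates. The Hand, Contact, and assign_contacts_to_hands names are hypothetical, not from the paper.

        # Sketch: fusing above-surface hand positions (depth camera) with
        # on-surface contacts (multi-touch table) by nearest-neighbour matching
        # in table coordinates. All names are illustrative.
        from dataclasses import dataclass
        from math import hypot

        @dataclass
        class Hand:
            hand_id: int   # tracked hand from the depth camera
            x: float       # position projected into table coordinates (mm)
            y: float

        @dataclass
        class Contact:
            contact_id: int  # touch blob reported by the multi-touch table
            x: float
            y: float

        def assign_contacts_to_hands(hands, contacts, max_dist=120.0):
            """Attribute each finger contact to the closest tracked hand,
            so exploration strategies can be analysed per hand."""
            assignment = {}
            for c in contacts:
                best = min(hands, key=lambda h: hypot(h.x - c.x, h.y - c.y), default=None)
                if best and hypot(best.x - c.x, best.y - c.y) <= max_dist:
                    assignment[c.contact_id] = best.hand_id
            return assignment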

    What Caused that Touch? Expressive Interaction with a Surface through Fiduciary-Tagged Gloves

    The hand has incredible potential as an expressive input device. Yet most touch technologies recognize hand parts imprecisely, if at all, usually by inferring them from touch shapes. We introduce the fiduciary-tagged glove as a reliable, inexpensive, and very expressive way to gather input about (a) many parts of a hand (fingertips, knuckles, palms, sides, backs of the hand), and (b) whether a touch comes from one person's hand or from different people's hands. Examples illustrate the interaction power gained by being able to identify and exploit these various hand parts.
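
    The abstract implies a simple mechanism: each fiducial tag on the glove identifies both the wearer and the hand part it is attached to. The sketch below illustrates that lookup; the IDs, names, and registry structure are hypothetical.

        # Sketch: resolving a fiducial marker seen by the table into a
        # (user, hand part) pair. The tag IDs and registry are made up; the
        # paper's actual glove layout is not specified in the abstract.
        from typing import NamedTuple, Optional

        class HandPart(NamedTuple):
            user: str
            hand: str   # "left" or "right"
            part: str   # e.g. "index_tip", "knuckle", "palm", "side", "back"

        # Each tag sewn onto a glove gets a unique fiducial ID.
        TAG_REGISTRY = {
            17: HandPart("alice", "right", "index_tip"),
            18: HandPart("alice", "right", "palm"),
            42: HandPart("bob",   "left",  "knuckle"),
        }

        def resolve_touch(fiducial_id: int) -> Optional[HandPart]:
            """Return which user and hand part caused the touch, if the tag is known."""
            return TAG_REGISTRY.get(fiducial_id)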

    Usage and recognition of finger orientation for multi-touch tabletop interaction

    Building on the observation that finger orientation is an inherent part of humans' interaction in the real world, exploiting finger orientation for multi-touch tabletop interaction would enable more natural interaction techniques. We motivate this with examples in which finger orientation improves or enriches interaction. We then present a simple and fast approach to detecting finger orientation reliably for multi-touch tabletop interaction. The steps involved are computationally cheap and therefore suit the needs of tracking software operating under time-critical conditions. We show that the presented approach detects finger orientation even for fingers that touch the tabletop surface only slightly. Further, we report recognition rates on real data captured by the camera inside a multi-touch tabletop as a measure of the precision and reliability of the approach.
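
    The abstract does not spell out the detection algorithm. One common, computationally cheap way to estimate finger orientation is from the second-order image moments of the contact blob seen by the tabletop camera; the sketch below illustrates that generic approach and is not necessarily the paper's method. Note that the major axis is ambiguous by 180 degrees, so an extra cue (such as the dimmer hovering part of the finger) would still be needed to resolve direction.

        # Sketch: estimating finger orientation from a binary contact blob via
        # image moments (a common technique; the paper's pipeline may differ).
        import numpy as np

        def finger_orientation(blob_mask: np.ndarray) -> float:
            """Return the orientation (radians) of the blob's major axis.

            blob_mask: 2D boolean array from the tabletop's IR camera,
            True where the finger contact is detected."""
            ys, xs = np.nonzero(blob_mask)
            if xs.size == 0:
                return 0.0
            x, y = xs - xs.mean(), ys - ys.mean()
            # Central second-order moments of the contact region.
            mu20, mu02, mu11 = (x * x).mean(), (y * y).mean(), (x * y).mean()
            # Major-axis angle of the equivalent ellipse (ambiguous by pi).
            return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)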

    AUGMENTED TOUCH INTERACTIONS WITH FINGER CONTACT SHAPE AND ORIENTATION

    Touchscreen interactions are far less expressive than the range of touch that human hands are capable of - even considering technologies such as multi-touch and force-sensitive surfaces. Recently, some touchscreens have added the capability to sense the actual contact area of a finger on the touch surface, which provides additional degrees of freedom - the size and shape of the touch, and the finger's orientation. These additional sensory capabilities hold promise for increasing the expressiveness of touch interactions - but little is known about whether users can successfully use the new degrees of freedom. To provide this baseline information, we carried out a study with a finger-contact-sensing touchscreen and asked participants to produce a range of touches and gestures with different shapes and orientations, with both one and two fingers. We found that people are able to reliably produce two touch shapes and three orientations across a wide range of touches and gestures - a result that was confirmed in another study that used the augmented touches for a screen-lock application.
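
    As a rough illustration of how "two shapes and three orientations" could be turned into discrete input symbols (for example, in the screen-lock application mentioned above), the sketch below bins a contact ellipse by its elongation and angle. The thresholds and field names are assumptions, not the study's actual encoding.

        # Sketch: binning an augmented touch event into one of two shape
        # classes and three orientation bins. Thresholds are illustrative.
        from dataclasses import dataclass

        @dataclass
        class TouchEvent:
            major_mm: float   # major axis of the reported contact ellipse
            minor_mm: float   # minor axis
            angle_deg: float  # ellipse orientation, 0..180

        def classify(t: TouchEvent) -> tuple[str, str]:
            # Shape: an elongated contact suggests a flattened finger,
            # a rounder one a fingertip.
            shape = "flat" if t.major_mm / max(t.minor_mm, 1e-6) > 1.6 else "tip"
            # Orientation: three coarse bins for reliable user control.
            if t.angle_deg < 60:
                orientation = "left"
            elif t.angle_deg < 120:
                orientation = "up"
            else:
                orientation = "right"
            return shape, orientation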

    Stereoscopic bimanual interaction for 3D visualization

    Virtual environments (VEs) have been widely used in various research fields for several decades, including 3D visualization, education, training, and games. VEs have the potential to enhance visualization and act as a general medium for human-computer interaction (HCI). However, limited research has evaluated virtual reality (VR) display technologies, and monocular and binocular depth cues, for human depth perception of volumetric (non-polygonal) datasets. In addition, a lack of standardization of three-dimensional (3D) user interfaces (UIs) makes it challenging to interact with many VE systems. To address these issues, this dissertation evaluates the effects of stereoscopic and head-coupled displays on depth judgments of volumetric datasets. It also evaluates a two-handed view manipulation technique that supports simultaneous 7-degree-of-freedom (DOF) navigation (x, y, z + yaw, pitch, roll + scale) in a multi-scale virtual environment (MSVE). Furthermore, it evaluates techniques that auto-adjust stereo view parameters to address stereoscopic fusion problems in an MSVE. Next, it presents a bimanual, hybrid user interface which combines traditional tracking devices with computer-vision based "natural" 3D input for multi-dimensional visualization in a semi-immersive desktop VR system. In conclusion, the dissertation provides guidelines for research design when evaluating UIs and interaction techniques.
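
    The abstract names a 7-DOF (translate + rotate + scale) bimanual navigation technique without detailing it. The sketch below shows a generic two-handed mapping from a pair of tracked hand positions to such an update, assuming only hand positions (not full hand poses) are available; it is illustrative, not the dissertation's technique.

        # Sketch: deriving a simultaneous translate / rotate / scale update from
        # two tracked hand positions, in the spirit of two-handed 7-DOF navigation.
        import numpy as np

        def bimanual_update(l0, r0, l1, r1):
            """l0, r0: previous left/right hand positions (3-vectors);
            l1, r1: current positions. Returns (translation, rotation_matrix, scale)."""
            m0, m1 = (l0 + r0) / 2, (l1 + r1) / 2   # hand midpoints
            v0, v1 = r0 - l0, r1 - l1               # inter-hand vectors
            n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
            translation = m1 - m0
            scale = n1 / max(n0, 1e-9)
            if n0 < 1e-9 or n1 < 1e-9:
                return translation, np.eye(3), scale
            # Rotation that aligns the old inter-hand direction with the new one.
            a, b = v0 / n0, v1 / n1
            axis = np.cross(a, b)
            s, c = np.linalg.norm(axis), float(np.dot(a, b))
            if s < 1e-9:
                return translation, np.eye(3), scale
            k = axis / s
            K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
            rotation = np.eye(3) + s * K + (1 - c) * (K @ K)   # Rodrigues' formula
            return translation, rotation, scale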

    Expanding tangible tabletop interfaces beyond the display

    The rising popularity of interactive tabletops and surfaces is spawning research and innovation in a wide variety of areas, including hardware and software technologies, interaction design, and novel interaction techniques, all of which seek to promote richer, more powerful, and more natural interaction modalities. Among these modalities, combined interaction on and above the surface, both with gestures and with tangible objects, is a very promising area. This dissertation is about expanding tangible tabletop surfaces beyond the display by exploring and developing a system from three different perspectives: hardware, software, and interaction design. It studies and summarizes the distinctive affordances of conventional 2D tabletop devices, drawing on an extensive literature review and additional use cases developed by the author, and subsequently explores the novel, not yet unveiled affordances of 3D-augmented tabletops. It reviews the existing hardware solutions for conceiving such a device and applies the necessary hardware modifications to an existing prototype developed and provided to us by Microsoft Research Cambridge. To exploit the full potential of this new device, a vision system for 3D interaction is developed that extends conventional 2D tabletop tracking of objects and hands to the detection of hands, fingers, and 6DoF markers above the surface, while retaining conventional tangible and touch interaction on the surface. The dissertation finishes by conceiving a complete software framework for developing applications that can benefit from these novel 3D interaction techniques, and it implements and tests several software prototypes as proofs of concept using this framework. With these findings, it concludes by presenting continuous tangible interaction gestures on and above the surface and proposing a novel classification that covers both conventional 2D interaction and the extended above-surface interaction developed here.
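
    As an illustration of what a framework unifying on-surface touch, above-surface hands, and 6DoF-tagged tangibles might expose to applications, the sketch below defines a minimal event model. All names and fields are hypothetical; the dissertation's actual framework API is not described in the abstract.

        # Sketch: a unified event model for combined on-surface and
        # above-surface input (touches, hands/fingers, 6DoF-tagged tangibles).
        from dataclasses import dataclass
        from enum import Enum, auto

        class Source(Enum):
            TOUCH = auto()          # finger contact on the surface
            HAND = auto()           # hand or finger tracked above the surface
            TANGIBLE_6DOF = auto()  # fiducial-tagged object with full pose

        @dataclass
        class InteractionEvent:
            source: Source
            x: float
            y: float
            z: float = 0.0               # height above the surface, 0 = touching
            yaw: float = 0.0             # pose, meaningful for 6DoF tangibles
            pitch: float = 0.0
            roll: float = 0.0
            marker_id: int | None = None  # set for tagged objects

        def on_surface(event: InteractionEvent, epsilon: float = 5.0) -> bool:
            """True when the event happens on (or very close to) the tabletop."""
            return event.z <= epsilon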

    3D-Modellierung mit interaktiven Oberflächen (3D Modeling with Interactive Surfaces)

    3D models are at the core of many important applications in industry, science, and entertainment. The creation of 3D models is a complex and time-consuming process. Current modeling tools are hard to learn and require a deep understanding of the underlying mathematical models. Furthermore, established input devices such as the mouse and keyboard do not exploit the full interaction potential of the human hand, especially regarding bimanual control. The growing interest in, and commercial breakthrough of, multi-touch displays and interactive surfaces raise questions about their potential for 3D modeling, which this work discusses and evaluates thoroughly. The presented approach follows the whole processing chain for multi-touch applications, starting with hardware and tracking issues, continuing with fundamental design discussions and operations such as selection and 3D manipulation of objects, and finishing with complex modeling techniques and metaphors. Regarding hardware and tracking, a robust illumination setup for the diffuse illumination technique is presented along with two extensions of this approach: hover detection and hand distinction. The design space is organized into specific design dimensions characterized by extremal positions to give a better overview of design choices and a classification of existing and future systems. Fundamental techniques for selection and integrated 3D manipulation with six degrees of freedom are presented and empirically evaluated. Finally, two established modeling techniques, implicit surfaces and virtual sculpting, are extended and evaluated for multi-touch input.
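
    The abstract mentions hover detection as an extension of the diffuse illumination (DI) setup. In DI images a touching finger generally reflects more IR light than a hovering one, so a per-blob intensity test is one plausible way to separate the two; the sketch below illustrates this assumption with made-up thresholds, not the thesis's calibrated values.

        # Sketch: distinguishing touching from hovering fingers in a diffuse
        # illumination setup via a per-blob intensity test.
        import numpy as np

        def classify_blobs(ir_frame: np.ndarray, blobs: list[np.ndarray],
                           touch_level: float = 180.0, hover_level: float = 90.0):
            """ir_frame: 8-bit IR camera image; blobs: list of boolean masks,
            one per detected blob. Returns one label per blob."""
            labels = []
            for mask in blobs:
                peak = float(ir_frame[mask].max()) if mask.any() else 0.0
                if peak >= touch_level:
                    labels.append("touch")
                elif peak >= hover_level:
                    labels.append("hover")
                else:
                    labels.append("noise")
            return labels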

    Analysing, visualising and supporting collaborative learning using interactive tabletops

    The key contribution of this thesis is a novel approach to designing, implementing, and evaluating the conceptual and technological infrastructure that captures students' activity at interactive tabletops and analyses these data with Interaction Data Analytics techniques to support teachers by enhancing their awareness of students' collaboration. To achieve this, the thesis presents a series of carefully designed user studies on how to capture, analyse, and distil indicators of collaborative learning. This is done in three steps: exploring the feasibility of the approach, constructing a novel solution, and executing the conceptual proposal both under controlled conditions and in the wild. A total of eight datasets were analysed for the studies described in this thesis. This work pioneered a number of areas, including the application of data mining techniques to study collaboration at the tabletop, a plug-in solution that adds user identification to a regular tabletop using a depth sensor, and the first multi-tabletop classroom used to run authentic collaborative activities tied to the curriculum. In summary, while the mechanisms, interfaces, and studies presented in this thesis were mostly explored in the context of interactive tabletops, the findings are likely to be relevant to other forms of groupware and learning scenarios that can be implemented in real classrooms. Through its mechanisms, studies, and conceptual framework, this thesis provides an important research foundation for the ways in which interactive tabletops, together with data mining and visualisation techniques, can be used to improve teachers' understanding of students' collaboration and learning in small groups.
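
    As an illustration of the kind of indicator that can be distilled once touches are attributed to individual students (for example via the depth-sensor user-identification plug-in mentioned above), the sketch below computes a simple participation-balance measure from a user-attributed touch log. The indicator and field names are illustrative, not the thesis's actual analytics.

        # Sketch: a simple collaboration indicator (participation balance)
        # computed from a log of user-attributed touch events.
        from collections import Counter

        def participation_balance(touch_log: list[dict]) -> float:
            """touch_log: events like {"user": "s1", "timestamp": 12.3, ...}.
            Returns 1.0 for perfectly even participation across group members,
            approaching 0.0 when a single student dominates."""
            counts = Counter(e["user"] for e in touch_log)
            if not counts:
                return 0.0
            total = sum(counts.values())
            shares = [c / total for c in counts.values()]
            # Normalised evenness: ratio of the smallest share to an equal share.
            return min(shares) * len(shares)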