311 research outputs found

    Bimanual marking menu for near surface interactions

    We describe a mouseless, near-surface version of the Bimanual Marking Menu system. To activate the menu, users make a pinch gesture with either their index or middle finger to initiate a left or right click, and then mark in the 3D space near the interactive area. We demonstrate how the system can be implemented using a commodity range camera such as the Microsoft Kinect, and report on several designs of the 3D marking system. Like the multi-touch marking menu, our system offers a large number of accessible commands. Since it does not rely on contact points to operate, it leaves the non-dominant hand available for other multi-touch interactions.
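    A minimal sketch of the pinch-to-click mapping described above, assuming a hypothetical tracker that already reports 3D fingertip positions from the range camera; the Fingertip type, the distance threshold, and the function names are illustrative, not from the paper.

        from dataclasses import dataclass
        from typing import Optional

        PINCH_THRESHOLD_MM = 25.0  # assumed thumb-to-finger distance that counts as a pinch

        @dataclass
        class Fingertip:
            x: float  # position in the range camera's coordinate frame, millimetres
            y: float
            z: float

        def _distance(a: Fingertip, b: Fingertip) -> float:
            return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5

        def classify_click(thumb: Fingertip, index: Fingertip, middle: Fingertip) -> Optional[str]:
            # A thumb-index pinch stands in for a left click, a thumb-middle pinch
            # for a right click; the subsequent 3D marking step is not modelled here.
            if _distance(thumb, index) < PINCH_THRESHOLD_MM:
                return "left"
            if _distance(thumb, middle) < PINCH_THRESHOLD_MM:
                return "right"
            return None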

    Tracking hands in action for gesture-based computer input

    This thesis introduces new methods for markerless tracking of the full articulated motion of hands and for informing the design of gesture-based computer input. Emerging devices such as smartwatches or virtual/augmented reality glasses need new input techniques for interaction on the move. The highly dexterous human hands could provide an always-on input capability without the need to carry a physical device. First, we present novel methods to address the hard computer-vision-based hand tracking problem under varying numbers of cameras, viewpoints, and run-time requirements. Second, we contribute to the design of gesture-based interaction techniques by presenting heuristic and computational approaches. The contributions of this thesis allow users to interact effectively with computers through markerless tracking of hands and objects in desktop, mobile, and egocentric scenarios.

    Form giving through gestural interaction to shape changing objects

    Thesis (S.M.) -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2012. Cataloged from PDF version of thesis. Includes bibliographical references. Shape-shifting materials have been part of sci-fi literature for decades. But if we invent them tomorrow, how will we communicate to them what shape we want them to morph into? Throughout history, humans have used the dexterity of their hands as the primary means to alter the topology of their surroundings. While direct manipulation, as a primary method of form giving, allows for high-precision deformation, the scope of interaction is limited to the scale of the hand. To extend the scope of manipulation beyond the hand scale, tools were invented to reach further and to augment the capabilities of our hands. In this thesis, I propose "Amphorm", a perceptually equivalent example of Radical Atoms, our vision of interaction techniques for future, highly malleable, shape-shifting materials. "Amphorm" is a cylindrical kinetic sculpture that resembles a vase. Since "Amphorm" is a dual citizen of the digital and the physical worlds, its shape can be altered in both. I describe novel interaction techniques for rapid shape deformation, both in the physical world through free-hand gestures and in the digital world through a graphical user interface. Additionally, I explore how the physical world could be synchronized with the digital world and how tools from both worlds can jointly alter dual citizens. by Dávid Lakatos. S.M.
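    As a rough illustration of gesture-driven shape deformation of a cylindrical object such as "Amphorm", the toy sketch below models the vase as a stack of rings whose radii follow the hand's distance from the central axis; the ring count, spacing, and radius limits are assumptions, not the thesis's actual control scheme.

        # Toy sketch: deform the ring nearest the hand toward the hand's
        # distance from the vase axis, clamped to mechanical limits.
        NUM_RINGS = 10
        MIN_RADIUS_CM, MAX_RADIUS_CM = 3.0, 15.0

        def deform(profile, hand_height_cm, hand_radius_cm, ring_spacing_cm=5.0):
            ring = int(round(hand_height_cm / ring_spacing_cm))
            if 0 <= ring < len(profile):
                profile[ring] = max(MIN_RADIUS_CM, min(MAX_RADIUS_CM, hand_radius_cm))
            return profile

        vase = [8.0] * NUM_RINGS                                        # start as a plain cylinder
        vase = deform(vase, hand_height_cm=22.0, hand_radius_cm=12.5)   # pull the nearest ring outward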

    Automated Tracking of Hand Hygiene Stages

    The European Centre for Disease Prevention and Control (ECDC) estimates that 2.5 million cases of Hospital Acquired Infections (HAIs) occur each year in the European Union. Hand hygiene is regarded as one of the most important preventive measures for HAIs. If it is implemented properly, hand hygiene can reduce the risk of cross-transmission of an infection in the healthcare environment. Good hand hygiene is not only important for healthcare settings. The recent ongoing coronavirus pandemic has highlighted the importance of hand hygiene practices in our daily lives, with governments and health authorities around the world promoting good hand hygiene practices. The WHO has published guidelines of hand hygiene stages to promote good hand washing practices. A significant amount of existing research has focused on the problem of tracking hands to enable hand gesture recognition. In this work, gesture tracking devices and image processing are explored in the context of the hand washing environment. Hand washing videos of professional healthcare workers were carefully observed and analyzed in order to recognize hand features associated with hand hygiene stages that could be extracted automatically. Selected hand features such as palm shape (flat or curved), palm orientation (palms facing or not), and hand trajectory (linear or circular movement) were then extracted and tracked with the help of a 3D gesture tracking device, the Leap Motion Controller. These features were further coupled together to detect the execution of a required WHO hand hygiene stage, "Rub hands palm to palm", with the help of the Leap sensor in real time. In certain conditions, the Leap Motion Controller enables a clear distinction to be made between the left and right hands. However, whenever the two hands came into contact with each other, sensor data from the Leap, such as palm position and palm orientation, was lost for one of the two hands. Hand occlusion was found to be a major drawback with the application of the device to this use case. Therefore, RGB digital cameras were selected for further processing and tracking of the hands. An image processing technique, using a skin detection algorithm, was applied to extract instantaneous hand positions for further processing, to enable various hand hygiene poses to be detected. Contour and centroid detection algorithms were further applied to track the hand trajectory in hand hygiene video recordings. In addition, feature detection algorithms were applied to a hand hygiene pose to extract the useful hand features. The video recordings did not suffer from occlusion as is the case for the Leap sensor, but the segmentation of one hand from another was identified as a major challenge with images, because the contour detection resulted in a continuous mass when the two hands were in contact. For future work, the data from gesture trackers, such as the Leap Motion Controller, and cameras (with image processing) could be combined to make a robust hand hygiene gesture classification system.
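    A minimal sketch of the skin-detection and centroid-tracking steps described above, using OpenCV; the HSV skin thresholds, the minimum-area cutoff, and the OpenCV 4.x return signature are assumptions, since the abstract does not give the actual parameters.

        import cv2
        import numpy as np

        # Assumed HSV bounds for skin; a real system would calibrate these per lighting setup.
        SKIN_LOWER = np.array([0, 30, 60], dtype=np.uint8)
        SKIN_UPPER = np.array([20, 150, 255], dtype=np.uint8)

        def hand_centroids(frame_bgr):
            """Return (x, y) centroids of skin-coloured regions in one video frame."""
            hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
            mask = cv2.inRange(hsv, SKIN_LOWER, SKIN_UPPER)
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            centroids = []
            for c in contours:
                m = cv2.moments(c)
                if m["m00"] > 1000:  # ignore small specks; area threshold is an assumption
                    centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
            return centroids

    Note that when the two hands touch, the skin mask merges into a single contour and only one centroid is returned, which is exactly the segmentation limitation reported in the abstract.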

    AirMouse: Finger Gesture for 2D and 3D Interaction

    This paper presents AirMouse, a new interaction technique based on finger gestures above the laptop's keyboard. At a reasonably low cost, the technique can replace traditional methods for pointing in two or three dimensions. Moreover, device-switching time is reduced, and no surface beyond the laptop itself is needed. In a 2D pointing evaluation, a vision-based implementation of the technique is compared with commonly used devices. The same implementation is also compared with the two most commonly used 3D pointing devices. The two user experiments show the benefits of this polyvalent technique: it is easy to learn, intuitive, and efficient. In particular, the experiments show that performance with AirMouse is promising compared with a touchpad and with dedicated 3D pointing devices, and that AirMouse outperforms FlowMouse, an earlier solution that also uses fingers above the keyboard.
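    A minimal sketch of the kind of fingertip-to-cursor mapping such a technique implies for 2D pointing; the calibration rectangle above the keyboard, the display resolution, and the function name are assumptions, and the paper's vision-based finger tracking is not reproduced here.

        SCREEN_W, SCREEN_H = 1920, 1080   # assumed display resolution

        # Assumed calibration: the region above the keyboard (in camera units)
        # that maps onto the full screen.
        X_MIN, X_MAX = -150.0, 150.0
        Y_MIN, Y_MAX = 50.0, 250.0

        def fingertip_to_cursor(finger_x, finger_y):
            """Linearly map a tracked fingertip position above the keyboard to
            screen coordinates, clamping at the screen edges."""
            u = (finger_x - X_MIN) / (X_MAX - X_MIN)
            v = (finger_y - Y_MIN) / (Y_MAX - Y_MIN)
            x = int(min(max(u, 0.0), 1.0) * (SCREEN_W - 1))
            y = int(min(max(v, 0.0), 1.0) * (SCREEN_H - 1))
            return x, y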

    Computational Modeling and Experimental Research on Touchscreen Gestures, Audio/Speech Interaction, and Driving

    As humans are exposed to rapidly evolving complex systems, there is a growing need for humans and systems to communicate through multiple modalities such as auditory, vocal (or speech), gesture, or visual channels; it is therefore important to evaluate multimodal human-machine interaction under multitasking conditions so as to improve human performance and safety. However, traditional methods of evaluating human performance and safety rely on experiments with human subjects, which are costly and time-consuming to conduct. To minimize the limitations of traditional usability tests, digital human models are often developed and used; they also help us better understand the underlying human mental processes so that we can effectively improve safety and avoid mental overload. In this dissertation research, I have combined computational cognitive modeling and experimental methods to study mental processes and identify differences in human performance and workload across various conditions. The computational cognitive models were implemented by extending the Queuing Network-Model Human Processor (QN-MHP) architecture, which enables simulation of human multi-task behaviors and multimodal interactions in human-machine systems. Three experiments were conducted to investigate human behaviors in multimodal and multitasking scenarios, combining three specific research aims: to understand (1) how humans use their finger movements to input information on touchscreen devices (i.e., touchscreen gestures), (2) how humans use auditory/vocal signals to interact with machines (i.e., audio/speech interaction), and (3) how humans drive vehicles (i.e., driving controls). Future research applications of computational modeling and experimental research are also discussed. Scientifically, the results of this dissertation make significant contributions to our understanding of the nature of touchscreen gestures, audio/speech interaction, and driving controls in human-machine systems, and of whether they benefit or jeopardize human performance and safety in multimodal and concurrent task environments. Moreover, in contrast to previous models for multitasking scenarios that focus mainly on visual processes, this study develops quantitative models of the combined effects of auditory, tactile, and visual factors on multitasking performance. From a practical perspective, the modeling work conducted in this research may help multimodal interface designers minimize the limitations of traditional usability tests and make quick design comparisons, less constrained by time-consuming factors such as developing prototypes and running human subjects. Furthermore, this research may help identify which elements in multimodal and multitasking scenarios increase workload and completion time, which can be used to reduce the number of accidents and injuries caused by distraction. Ph.D. Industrial & Operations Engineering. University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/143903/1/heejinj_1.pd
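    To make the queueing-network idea concrete, the toy sketch below pushes tasks through serial perceptual, cognitive, and motor servers and reports when each task completes; the three-stage topology and the service times are simplified assumptions, as QN-MHP itself uses a richer subnetwork structure.

        # Assumed mean service times (seconds) for three serial processing stages.
        SERVICE = {"perceptual": 0.10, "cognitive": 0.07, "motor": 0.20}
        STAGES = ["perceptual", "cognitive", "motor"]

        def simulate(arrival_times):
            """Return completion times for tasks served one at a time per stage."""
            free_at = {s: 0.0 for s in STAGES}  # when each server next becomes idle
            completions = []
            for t in sorted(arrival_times):
                for stage in STAGES:
                    start = max(t, free_at[stage])   # wait if the server is busy
                    t = start + SERVICE[stage]
                    free_at[stage] = t
                completions.append(t)
            return completions

        # Two stimuli arriving 50 ms apart: the second queues behind the first,
        # so its completion time grows, illustrating multitasking workload.
        print(simulate([0.0, 0.05]))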

    Convex Interaction : VR o mochiita kōdō asshuku ni yoru kūkanteki intarakushon no kakuchō [Extending spatial interaction through behavior compression using VR]
