
    Extremal Regions Detection Guided by Maxima of Gradient Magnitude


    Lightweight palm and finger tracking for real-time 3D gesture control

    We present a novel technique implementing barehanded interaction with virtual 3D content by employing a time-of-flight camera. The system improves on existing 3D multi-touch systems by working regardless of lighting conditions and supplying a working volume large enough for multiple users. Previous systems were limited either by environmental requirements, working volume, or the computational resources necessary for real-time operation. By employing a time-of-flight camera, the system reliably recognizes gestures at the finger level in real time at more than 50 fps on commodity computer hardware, using our newly developed precision hand- and finger-tracking algorithm. Building on this algorithm, the system performs gesture recognition with simple constraint modeling over statistical aggregations of hand appearances in a working volume of more than 8 cubic meters. Two iterations of user tests were performed on a prototype system, demonstrating the feasibility and usability of the approach and providing first insights regarding the acceptance of true barehanded touch-based 3D interaction.
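
    The abstract describes a pipeline from time-of-flight depth frames to fingertip-level tracking. As a rough illustration of that kind of pipeline, the sketch below segments a hand by depth range and ranks pixels by distance from the palm centroid as crude fingertip candidates. The thresholds and the distance heuristic are illustrative assumptions, not the authors' algorithm.

```python
# A minimal sketch: depth frame -> hand segmentation -> fingertip
# candidates. All thresholds are assumptions for illustration.
import numpy as np

def segment_hand(depth, near=0.3, far=0.8):
    """Keep pixels in a plausible interaction range (metres)."""
    return (depth > near) & (depth < far)

def fingertip_candidates(mask, n_tips=5):
    """Rank hand pixels by distance from the palm centroid; the
    farthest points are crude fingertip candidates (a real tracker
    would cluster these and enforce per-finger separation)."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return []
    cx, cy = xs.mean(), ys.mean()      # palm centroid estimate
    d = np.hypot(xs - cx, ys - cy)     # radial distance per pixel
    order = np.argsort(d)[::-1]        # farthest first
    return [(int(xs[i]), int(ys[i])) for i in order[:n_tips]]

# Usage with a synthetic depth frame:
depth = np.full((240, 320), 2.0)       # background at 2 m
depth[100:140, 100:200] = 0.5          # a "hand" at 0.5 m
print(fingertip_candidates(segment_hand(depth)))
```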

    Improving Multi-Touch Interactions Using Hands as Landmarks

    Efficient command selection is just as important for multi-touch devices as it is for traditional interfaces that follow the Windows-Icons-Menus-Pointers (WIMP) model, but rapid selection in touch interfaces can be difficult because these systems often lack the mechanisms that desktop systems use for expert shortcuts (such as keyboard shortcuts). Although interaction techniques based on spatial memory can improve the situation by allowing fast revisitation from memory, the lack of landmarks often makes it hard to remember command locations in a large set. One potential landmark in touch interfaces, however, is people's hands and fingers: these provide an external reference frame that is well known and always present when interacting with a touch display. To explore the use of hands as landmarks for improving command selection, we designed hand-centric techniques called HandMark menus. We implemented HandMark menus for two platforms: one version that allows bimanual operation for digital tables, and another that uses single-handed serial operation for handheld tablets; in addition, we developed variants for both platforms that support different numbers of commands. We tested the new techniques against standard selection methods, including tabbed menus and popup toolbars. The results of the studies show that HandMark menus perform well (in several cases significantly faster than standard methods) and that they support the development of spatial memory. Overall, this thesis demonstrates that people's intimate knowledge of their hands can be the basis for fast interaction techniques that improve the performance and usability of multi-touch systems.
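
    To make the hand-as-landmark idea concrete, here is a hypothetical sketch that anchors command slots to the spaces around a tracked hand's fingertips, so each command has a stable, body-relative location. The between-finger slot placement is an illustrative assumption, not the published HandMark layout.

```python
# Hypothetical HandMark-style layout: commands are placed at the
# midpoints between adjacent fingertips of the non-dominant hand.
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def layout_handmark(fingertips: Dict[str, Point],
                    commands: List[str]) -> Dict[str, Point]:
    """Assign each command to a slot between two adjacent fingertips."""
    order = ["thumb", "index", "middle", "ring", "pinky"]
    tips = [fingertips[f] for f in order]
    slots = [((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
             for a, b in zip(tips, tips[1:])]   # 4 between-finger slots
    return {cmd: slot for cmd, slot in zip(commands, slots)}

tips = {"thumb": (0, 0), "index": (30, -60), "middle": (60, -70),
        "ring": (90, -60), "pinky": (115, -40)}
print(layout_handmark(tips, ["copy", "paste", "undo", "group"]))
```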

    Virtual Valcamonica: collaborative exploration of prehistoric petroglyphs and their surrounding environment in multi-user virtual reality

    In this paper, we present a novel multi-user virtual reality environment for the interactive, collaborative 3D analysis of large 3D scans, along with the technical advancements that were necessary to build it: a multi-view rendering system for large 3D point clouds, a suitable display infrastructure, and a suite of collaborative 3D interaction techniques. The cultural heritage site of Valcamonica in Italy, with its large collection of prehistoric rock-art, served as an exemplary use case for evaluation. The results show that our output-sensitive level-of-detail rendering system is capable of visualizing a 3D dataset with an aggregate size of more than 14 billion points at interactive frame rates. The system design in this exemplar application results from close exchange with a small group of potential users, archaeologists with expertise in rock-art, and allows them to explore the prehistoric art and its spatial context with highly realistic appearance. A set of dedicated interaction techniques was developed to facilitate collaborative visual analysis. A multi-display workspace supports the immediate comparison of geographically distributed artifacts. An expert review of the final demonstrator confirmed the potential for added value in rock-art research and the usability of our collaborative interaction techniques.
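
    Output-sensitive level-of-detail rendering typically refines a spatial hierarchy only where the on-screen error is still visible, under a fixed point budget. The sketch below shows that general pattern for a point-cloud octree; the node structure, the pinhole projection of node extent, and all thresholds are assumptions for illustration, not the paper's renderer.

```python
# Sketch of output-sensitive LOD selection: refine octree nodes whose
# projected screen-space extent exceeds a pixel threshold, stopping
# when a total point budget is exhausted.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    center_dist: float          # distance from camera (m)
    extent: float               # node bounding-cube edge length (m)
    points: int                 # points stored at this LOD level
    children: List["Node"] = field(default_factory=list)

def select_lod(root: Node, fov_px: float = 1000.0,
               max_error_px: float = 2.0, budget: int = 5_000_000):
    """Breadth-first refinement under a total point budget."""
    chosen, queue, used = [], [root], 0
    while queue:
        node = queue.pop(0)
        if used + node.points > budget:
            break                          # budget spent: stop refining
        chosen.append(node)
        used += node.points
        # crude pinhole projection of node extent to pixels
        size_px = fov_px * node.extent / max(node.center_dist, 1e-6)
        if size_px > max_error_px:
            queue.extend(node.children)    # still too coarse: descend
    return chosen

root = Node(center_dist=10.0, extent=50.0, points=100_000)
print(sum(n.points for n in select_lod(root)))
```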

    Real-time person re-identification for interactive environments

    The work presented in this thesis was motivated by a vision of the future in which intelligent environments in public spaces such as galleries and museums deliver useful and personalised services to people via natural interaction, that is, without the need for people to provide explicit instructions via tangible interfaces. Delivering the right services to the right people requires a means of biometrically identifying individuals and then re-identifying them as they move freely through the environment. Delivering the service they desire requires sensing their context, for example, their location or proximity to resources. This thesis presents both a context-aware system and a person re-identification method. A tabletop display was designed and prototyped with an infrared person-sensing context function. In an experimental evaluation it exhibited tracking performance comparable to other, more complex systems. A real-time, viewpoint-invariant person re-identification method is proposed, based on a novel set of Viewpoint Invariant Multi-modal (ViMM) feature descriptors collected from depth-sensing cameras. The method uses colour and a combination of anthropometric properties logged as a function of body orientation. A neural network classifier is used to perform re-identification.
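
    The key idea in the abstract is logging features as a function of body orientation, so a query is only compared against gallery samples seen from a similar viewpoint. The sketch below illustrates that orientation-binning idea; a nearest-centroid match stands in for the paper's neural network classifier, and the feature names and bin size are assumptions.

```python
# Hypothetical orientation-binned re-identification gallery.
import numpy as np

N_BINS = 8                                  # 45-degree orientation bins

class Gallery:
    def __init__(self):
        self.store = {}                     # person_id -> per-bin samples

    def enroll(self, pid, orientation_deg, feats):
        b = int(orientation_deg % 360 // (360 / N_BINS))
        bins = self.store.setdefault(pid, [[] for _ in range(N_BINS)])
        bins[b].append(np.asarray(feats, dtype=float))

    def identify(self, orientation_deg, feats):
        """Match a query against enrolled people at the same viewpoint."""
        b = int(orientation_deg % 360 // (360 / N_BINS))
        q, best = np.asarray(feats, dtype=float), (None, np.inf)
        for pid, bins in self.store.items():
            if not bins[b]:
                continue                    # no sample at this viewpoint
            d = np.linalg.norm(np.mean(bins[b], axis=0) - q)
            if d < best[1]:
                best = (pid, d)
        return best

g = Gallery()
g.enroll("alice", 10, [1.71, 0.45, 0.30])   # e.g. height, shoulder, hue
g.enroll("bob", 15, [1.85, 0.52, 0.10])
print(g.identify(12, [1.70, 0.44, 0.31]))   # -> ('alice', distance)
```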

    The Performance and Preference of Different Fingers and Chords for Pointing, Dragging, and Object Transformation

    The development of robust methods to identify which finger is causing each touch point, called “finger identification,” will open up a new input space where interaction designers can associate system actions with different fingers. However, relatively little is known about the performance of specific fingers as single touch points or when used together in a “chord.” We present empirical results for accuracy, throughput, and subjective preference gathered in five experiments with 48 participants exploring all 10 fingers and 7 two-finger chords. Based on these results, we develop design guidelines for reasonable target sizes for specific fingers and two-finger chords, and a relative ranking of the suitability of fingers and two-finger chords for common multi-touch tasks. Our work contributes new knowledge regarding specific finger and chord performance and can inform the design of future interaction techniques and interfaces utilizing finger identification.
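
    The abstract reports per-finger throughput. For context, throughput in pointing studies is conventionally derived from Fitts's law; a standard computation (Shannon formulation) is sketched below. The sample distance, width, and movement time are made up.

```python
# Standard Fitts's-law throughput: ID = log2(D/W + 1), TP = ID / MT.
import math

def fitts_throughput(distance, width, movement_time_s):
    """Throughput in bits/s for one target acquisition."""
    index_of_difficulty = math.log2(distance / width + 1)
    return index_of_difficulty / movement_time_s

# e.g. a 12 mm target at 96 mm distance, acquired in 0.6 s:
print(round(fitts_throughput(96, 12, 0.6), 2), "bits/s")  # ~5.28
```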

    Leveraging finger identification to integrate multi-touch command selection and parameter manipulation

    Identifying which fingers are touching a multi-touch surface provides a very large input space. We describe FingerCuts, an interaction technique inspired by desktop keyboard shortcuts that exploits this potential. FingerCuts enables integrated command selection and parameter manipulation, uses feed-forward and feedback to increase discoverability, is backward compatible with current touch input techniques, and is adaptable to different touch device form factors. We implemented three variations of FingerCuts, each tailored to a different device form factor: tabletop, tablet, and smartphone. Qualitative and quantitative studies conducted on the tabletop suggest that, with some practice, FingerCuts is expressive, easy to use, and increases a sense of continuous interaction flow, and that interaction with FingerCuts is as fast as, or faster than, using a graphical user interface. A theoretical analysis of FingerCuts using the Fingerstroke-Level Model (FLM) matches our quantitative study results, justifying our use of FLM to analyse and validate performance for the other device form factors.
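
    Integrated command selection and parameter manipulation can be pictured as a dispatch table keyed by finger identity, where the subsequent drag supplies the parameter. The sketch below shows that shape; the command table and event model are illustrative assumptions, not the FingerCuts implementation.

```python
# Hypothetical finger-identified dispatch: finger identity selects the
# command, drag delta manipulates its parameter in the same touch.
from typing import Callable, Dict

COMMANDS: Dict[str, Callable[[float], None]] = {
    "index":  lambda dy: print(f"brush size += {dy:.1f}"),
    "middle": lambda dy: print(f"opacity    += {dy:.1f}"),
    "ring":   lambda dy: print(f"zoom       += {dy:.1f}"),
}

def on_touch_drag(finger_id: str, dy: float):
    """Integrated selection + manipulation in one touch."""
    handler = COMMANDS.get(finger_id)
    if handler is None:
        return          # unmapped fingers fall through to normal input
    handler(dy)

on_touch_drag("index", 4.0)     # select 'brush size' and adjust it
on_touch_drag("middle", -2.5)
```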

    Phrasing Bimanual Interaction for Visual Design

    Architects and other visual thinkers create external representations of their ideas to support early-stage design. They compose visual imagery with sketching to form abstract diagrams as representations. When working with digital media, they apply various visual operations to transform representations, often engaging in complex sequences. This research investigates how to build interactive capabilities to support designers in putting together, that is phrasing, sequences of operations using both hands. In particular, we examine how phrasing interactions with pen and multi-touch input can support modal switching among different visual operations that in many commercial design tools require menus and tool palettes, techniques originally designed for the mouse, not pen and touch. We develop an interactive bimanual pen+touch diagramming environment and study its use in landscape architecture design studio education. We observe the interesting forms of interaction that emerge and how our bimanual interaction techniques support visual design processes. Based on the needs of architects, we develop LayerFish, a new bimanual technique for layering overlapping content, and conduct a controlled experiment to evaluate its efficacy. We explore the use of wearables to identify which user, and which hand, is touching, to support phrasing together direct-touch interactions on large displays. From the design and development of the environment and both field and controlled studies, we derive a set of methods, based on human bimanual specialization theory, for phrasing modal operations through bimanual interactions without menus or tool palettes.
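
    A common way to phrase modes bimanually, in the spirit described here, is for a held non-dominant-hand touch to set a mode that lasts only as long as the touch, while the dominant hand acts within it, avoiding any menu round-trip. The mode names and event flow below are illustrative assumptions.

```python
# Minimal sketch of bimanual phrasing: a non-dominant-hand touch holds
# a mode; dominant-hand strokes are interpreted in that mode.
class BimanualModeMachine:
    def __init__(self):
        self.mode = "ink"                   # default pen behaviour

    def nondominant_down(self, region: str):
        # the held touch phrases the following strokes into one mode
        self.mode = {"left-edge": "select", "corner": "layer"}.get(region, "ink")

    def nondominant_up(self):
        self.mode = "ink"                   # releasing ends the phrase

    def dominant_stroke(self, stroke: str):
        print(f"apply {stroke!r} as {self.mode}")

m = BimanualModeMachine()
m.dominant_stroke("line A")                 # -> ink
m.nondominant_down("left-edge")
m.dominant_stroke("lasso B")                # -> select (phrased)
m.nondominant_up()
```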