    Vision-based interaction within a multimodal framework

    Our contribution is to the field of video-based interaction techniques and is integrated into the home environment of the EMBASSI project. This project addresses innovative methods of man-machine interaction achieved through the development of intelligent assistance and anthropomorphic user interfaces. Within this project, multimodal techniques are a basic requirement, particularly with regard to the integration of modalities. We use a stereoscopic approach to allow the natural selection of devices via pointing gestures: the pointing hand is segmented from the video images, and the 3D position and orientation of the forefinger are calculated. This modality is subsequently integrated with speech within a multimodal interaction infrastructure. In a first phase, we use semantic fusion with amodal input, treating the modalities in a so-called late-fusion state.
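
    The geometric core of this pointing modality can be illustrated with a short sketch: assuming a prior hand-segmentation step has already produced 2D fingertip and knuckle positions in both calibrated camera images, the 3D fingertip position and the pointing direction follow from standard stereo triangulation. The projection matrices and image points below are hypothetical inputs for illustration, not the EMBASSI implementation itself.

```python
# Minimal sketch (assumed inputs, not the EMBASSI code): triangulate the
# forefinger tip and knuckle from two calibrated cameras and derive a
# pointing ray. P_left/P_right are the 3x4 projection matrices from a prior
# calibration; the 2D points come from a hand-segmentation step.
import numpy as np
import cv2

def triangulate(P_left, P_right, pt_left, pt_right):
    """Return the 3D point (in the calibration frame) for one 2D correspondence."""
    pl = np.asarray(pt_left, dtype=float).reshape(2, 1)
    pr = np.asarray(pt_right, dtype=float).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)   # homogeneous 4x1 result
    return (X_h[:3] / X_h[3]).ravel()

def pointing_ray(P_left, P_right, tip_l, tip_r, knuckle_l, knuckle_r):
    """Pointing ray: origin at the fingertip, direction from knuckle to tip."""
    tip = triangulate(P_left, P_right, tip_l, tip_r)
    knuckle = triangulate(P_left, P_right, knuckle_l, knuckle_r)
    direction = tip - knuckle
    return tip, direction / np.linalg.norm(direction)
```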

    Intuitive Interaktion durch videobasierte Gestenerkennung (Intuitive Interaction through Video-Based Gesture Recognition)

    The vision behind research on video-based hand gesture recognition is to realise a new kind of interaction between humans and computers, beyond classical input devices such as mouse and keyboard. The aim of this thesis is to develop new video-based real-time algorithms that enable robust and accurate recognition of human hand gestures and allow interaction with the computer even for technically unversed users. In this thesis, four different algorithms are developed that can be used for intuitive interaction purposes, depending on the demands and needs of different scenario applications.

    Interactive Museum Exhibit Using Pointing Gesture Recognition

    This paper describes a Mixed Reality-supported interactive museum exhibit. Using an easy and intuitive pointing gesture recognition system, the museum visitor can create his or her own exhibit by choosing between different painters, artistic topics, or simply between different images. The use of a video-based gesture tracking system ensures a seamless integration of Mixed Reality technologies into the environment of a traditional museum. It also reaches technically unversed users, since no physical devices have to be handled and no training phase is necessary for the interaction. Displaying digitised paintings on an interactive screen is useful, for example, in museums that lack the space to present all of their paintings in the traditional way. Furthermore, direct interaction with art pieces leads to a deeper involvement with and understanding of them, whereas manipulation of the original paintings is obviously prohibited. The paintings can be explored by letting the user look at details that would normally only be visible with tools such as a magnifying glass.
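
    How a recognised pointing gesture can be mapped to a selection on the interactive screen is sketched below under simple assumptions: the pointing ray is expressed in a screen-aligned coordinate frame with the display lying in the plane z = 0, and the exhibit shows a hypothetical grid of painting thumbnails. This is an illustration of the geometry, not the exhibit's actual code.

```python
# Illustrative sketch (assumed screen size and layout): intersect the 3D
# pointing ray with the flat display at z = 0 and pick the painting tile hit.
import numpy as np

SCREEN_W_M, SCREEN_H_M = 2.0, 1.5        # assumed physical screen size in metres
GRID_COLS, GRID_ROWS = 4, 3              # assumed layout of painting thumbnails

def ray_screen_intersection(origin, direction):
    """Intersect the ray with the plane z = 0 (the screen); return (x, y) or None."""
    if abs(direction[2]) < 1e-9:          # ray parallel to the screen
        return None
    t = -origin[2] / direction[2]
    if t <= 0:                            # screen lies behind the user
        return None
    hit = origin + t * direction
    return hit[0], hit[1]

def selected_tile(origin, direction):
    """Return the (column, row) of the painting the visitor points at, if any."""
    hit = ray_screen_intersection(np.asarray(origin, float), np.asarray(direction, float))
    if hit is None:
        return None
    x, y = hit
    if not (0 <= x < SCREEN_W_M and 0 <= y < SCREEN_H_M):
        return None                       # pointing outside the display area
    return int(x / SCREEN_W_M * GRID_COLS), int(y / SCREEN_H_M * GRID_ROWS)
```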

    Dynamic Gestural Interaction with Immersive Environments

    This paper describes our ongoing research on deviceless interaction using hand gesture recognition with a calibrated stereo system. Video-based interaction is one of the most intuitive kinds of Human-Computer Interaction with Virtual Reality applications, because users are not wired to a computer. It should therefore be considered the preferred kind of interaction, especially for technically unversed users. In particular, when interacting with three-dimensional environments, pointing is one of the most intuitive kinds of interaction used by humans. Nevertheless, a pointing posture alone lacks some classical interaction metaphors, such as moving objects in 3D space. As the most important gestures beyond pointing, 'grab' and 'release' gestures were identified to enable the interactive movement of 3D objects in virtual worlds. This paper describes our video-based gesture recognition system, which uses two calibrated cameras observing the user in front of a large display screen, identifies three different hand gestures in real time, and determines 3D information such as the 3D position of the user's hand or the pointing direction when a pointing gesture is performed. Different scenario applications, such as a virtual chess game against the computer and an industrial scenario (placing filters on an air pump system in 3D space), were developed and tested.
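
    A plausible way to combine the three recognised gestures into the "move objects in 3D space" metaphor is sketched below; the gesture labels and 3D hand positions are assumed to be delivered by the stereo recognition system, and the scene model is a hypothetical stand-in rather than the paper's implementation.

```python
# Hedged sketch (assumed inputs): turn 'grab'/'release'/other gesture labels
# plus the tracked 3D hand position into interactive object movement.
import numpy as np

class ObjectManipulator:
    def __init__(self, scene_objects):
        # scene_objects: {name: (x, y, z)} -- hypothetical scene model
        self.scene_objects = {k: np.asarray(v, float) for k, v in scene_objects.items()}
        self.held = None       # name of the currently grabbed object, if any
        self.offset = None     # object position relative to the hand at grab time

    def update(self, gesture, hand_pos):
        hand_pos = np.asarray(hand_pos, float)
        if gesture == "grab" and self.held is None:
            self.held = self._closest_object(hand_pos)
            if self.held is not None:
                self.offset = self.scene_objects[self.held] - hand_pos
        elif gesture == "release":
            self.held, self.offset = None, None
        elif self.held is not None:
            # While an object is held, it follows the tracked hand position.
            self.scene_objects[self.held] = hand_pos + self.offset

    def _closest_object(self, hand_pos, max_dist=0.2):
        # Only objects within max_dist (metres, assumed unit) of the hand can be grabbed.
        best, best_d = None, max_dist
        for name, pos in self.scene_objects.items():
            d = float(np.linalg.norm(pos - hand_pos))
            if d < best_d:
                best, best_d = name, d
        return best
```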

    3D reconstruction of sports events for digital TV

    As the capabilities of video standards and receiver hardware increase towards integrated 3D animations, generating realistic content is now becoming a limiting factor. In this paper we present a new technique for generating 3D content from reality, i.e. from video sequences acquired with normal TV cameras. The major aim is to provide the TV viewer with animated 3D reconstructions of athletic events in MPEG-4 over Digital Video Broadcast (DVB), which allows an immersive experience through free navigation and interaction on the receiver side. As intervention in the actual scene, e.g. by markers, is often prohibited, markerless computer vision techniques are applied to the images of normal broadcasting cameras for the accurate estimation of an athlete's movements. The paper focuses on the key components for the realistic reconstruction of 3D geometric features: the calibration of moving TV cameras and the modelling of the moving athlete in its environment.
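
    The calibration of a moving TV camera can be illustrated with a minimal sketch: given known 3D landmarks on the track or field and their detected 2D positions in the current frame, the camera pose follows from a perspective-n-point solution. The landmark coordinates, the intrinsics, and the use of OpenCV's solvePnP are assumptions for illustration; the paper's actual calibration procedure may differ.

```python
# Sketch under assumptions: estimate the pose of a moving broadcast camera
# from known 3D reference points (e.g. lane markings) and their detected 2D
# image positions, then form the per-frame projection matrix.
import numpy as np
import cv2

# Assumed 3D reference points on the athletics track, in metres (world frame).
object_points = np.array([[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 0]], dtype=np.float64)
# Their detected 2D positions in the current TV frame (hypothetical values).
image_points = np.array([[320, 400], [900, 410], [880, 250], [340, 245]], dtype=np.float64)
# Assumed camera intrinsics: focal length in pixels and principal point.
K = np.array([[1200, 0, 640], [0, 1200, 360], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, distCoeffs=None)
if ok:
    R, _ = cv2.Rodrigues(rvec)             # rotation matrix of the camera pose
    P = K @ np.hstack([R, tvec])           # 3x4 projection matrix for this frame
```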

    Skill Measurement Through Real-Time 3D Reconstruction and 3D Pose Estimation

    Skill measurement for sports analysis is still a challenging issue for sports scientists. In recent years, video-based analysis of body movements and their correction has become a widespread tool in modern athlete training centres throughout the different stages of an athlete's pre-competition preparation. The challenges imposed by this supporting service are manifold and range from high-resolution capture of the athlete to precise real-time reconstruction of body movements. By augmenting ideal virtual poses onto the real motions, a coach is able to analyse and communicate incorrect or even incomplete motions of the athlete. For the precise measurement of body motions, motion capture (MoCap) systems are typically used. During the last decade, marker-based motion capture systems have matured and found their way into medical and sports science. Markerless motion capture systems, however, are still in their infancy. This paper describes a new approach to markerless motion capture and its use for sports skill measurement and analysis.
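
    One simple way to turn markerless pose estimates into a skill measure is to compare joint angles of the reconstructed pose with those of a coach-defined ideal pose, as sketched below; the joint naming and the mean-absolute-difference score are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch (hypothetical data): quantify how far an athlete's
# estimated pose deviates from an ideal reference pose via joint angles.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the 3D points a-b-c."""
    v1, v2 = np.asarray(a, float) - np.asarray(b, float), np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def pose_deviation(estimated_joints, ideal_joints, triplets):
    """Mean absolute angle difference over (a, b, c) joint triplets,
    e.g. ('hip', 'knee', 'ankle') for the knee angle."""
    diffs = [abs(joint_angle(*[estimated_joints[j] for j in t]) -
                 joint_angle(*[ideal_joints[j] for j in t])) for t in triplets]
    return sum(diffs) / len(diffs)
```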