Using Haar-like feature classifiers for hand tracking in tabletop augmented reality
We propose in this paper a hand interaction approach for Augmented Reality
tabletop applications. We detect the user's hands using Haar-like feature
classifiers and correlate their positions with the fixed markers on the
table. This gives the user the ability to move, rotate and resize the
virtual objects located on the table with their bare hands.
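The detector's building block, the Haar-like feature, is just a difference of rectangular pixel sums, made O(1) per evaluation by an integral image. A minimal NumPy sketch of one such feature (illustrative only, not the trained classifiers from the paper):

```python
import numpy as np

def integral_image(img):
    # Summed-area table with a zero row/column prepended, so
    # ii[y, x] == sum of img[:y, :x].
    ii = np.cumsum(np.cumsum(img.astype(np.float64), axis=0), axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    # Sum over the w x h rectangle with top-left (x, y), in O(1).
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect(img, x, y, w, h):
    # Two-rectangle Haar-like feature: left half minus right half.
    # A large magnitude flags a vertical intensity edge, the kind of
    # cue a cascade of such features thresholds to detect a hand.
    ii = integral_image(img)
    half = w // 2
    return rect_sum(ii, x, y, half, h) - rect_sum(ii, x + half, y, half, h)
```

A real detector evaluates thousands of such features in a boosted cascade; this shows only the feature arithmetic itself.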
Gesture recognition through angle space
As the notion of ubiquitous computing becomes a reality, the keyboard-and-mouse paradigm becomes less satisfactory as an input modality. The ability to interpret gestures can open another dimension in user interface technology. In this paper, we present a novel approach to dynamic hand gesture modeling using neural networks. The results show high accuracy in detecting single and multiple gestures, which makes this a promising approach for gesture recognition from continuous input with undetermined boundaries. The method is independent of the input device and can be applied as a general back-end processor for gesture recognition systems.
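The "angle space" idea can be illustrated by encoding a 2D trajectory as the quantized direction of each successive segment, a device-independent feature stream a classifier can consume. The bin count and encoding below are our assumptions for illustration, not the paper's exact network input:

```python
import math

def angle_features(points, bins=8):
    # Encode a gesture trajectory in angle space: the direction of
    # each successive segment, quantized into `bins` sectors of the
    # full circle (0 = rightward, increasing counter-clockwise).
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        theta = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(theta / (2 * math.pi) * bins) % bins)
    return codes
```

Because only directions are kept, the encoding is invariant to translation and scale, which is what makes it attractive as a generic front end for gesture classifiers.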
Interactive exploration of historic information via gesture recognition
Developers of interactive exhibits often struggle to find appropriate input devices
that enable intuitive control, permitting visitors to engage effectively with the
content. Recently, motion-sensing input devices like the Microsoft Kinect or Panasonic
D-Imager have become available, enabling gesture-based control of computer
systems. These devices present an attractive input option for exhibits since users
can interact with their hands and are not required to physically touch any part
of the system. In this thesis we investigate techniques that enable the raw data coming
from these types of devices to be used to control an interactive exhibit. Object
recognition and tracking techniques are used to analyse the user's hand, from which
movements and clicks are processed. To show the effectiveness of the techniques, the gesture
system is used to control an interactive system designed to inform the public about
iconic buildings in the centre of Norwich, UK. We evaluate two methods of making
selections in the test environment.
At the time of experimentation these technologies were relatively new to the image
processing environment. As a result of the research presented in this thesis, the techniques
and methods used have been detailed and published [3] at the VSMM (Virtual
Systems and Multimedia 2012) conference with the intention of further advancing
the field.
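One common way to turn raw hand positions into "clicks" without a physical button is dwell selection: trigger once the hand settles in place for long enough. A sketch of this plausible variant of a selection method (the radius and dwell thresholds are assumptions, not the thesis's tuned values):

```python
def dwell_click(positions, radius=20.0, dwell_frames=30):
    # Emit a click (as a frame index) once the tracked hand has stayed
    # within `radius` pixels of where it settled for `dwell_frames`
    # consecutive frames; moving away resets the dwell timer.
    anchor, count = None, 0
    clicks = []
    for i, (x, y) in enumerate(positions):
        if anchor and (x - anchor[0])**2 + (y - anchor[1])**2 <= radius**2:
            count += 1
            if count == dwell_frames:
                clicks.append(i)
        else:
            anchor, count = (x, y), 1
    return clicks
```

Dwell selection trades speed for robustness: it needs no extra gesture vocabulary, at the cost of a fixed delay per selection.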
Pictures in Your Mind: Using Interactive Gesture-Controlled Reliefs to Explore Art
Tactile reliefs offer many benefits over the more classic raised line drawings or tactile diagrams, as depth, 3D shape, and surface textures are directly perceivable. Although often created for blind and visually impaired (BVI) people, a wider range of people may benefit from such multimodal material. However, some reliefs are still difficult to understand without proper guidance or accompanying verbal descriptions, hindering autonomous exploration.
In this work, we present a gesture-controlled interactive audio guide (IAG) based on recent low-cost depth cameras that can be operated directly with the hands on relief surfaces during tactile exploration. The interactively explorable, location-dependent verbal and captioned descriptions promise rapid tactile accessibility to 2.5D spatial information in a home or education setting, to online resources, or as a kiosk installation at public places.
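The core interaction can be sketched as follows: with a depth camera looking at the relief, a fingertip "touch" is declared when its measured depth matches the pre-captured surface depth at that point, and the touched region selects a description. Region names and the tolerance below are hypothetical, not the prototype's actual values:

```python
def describe_touch(regions, x, y, surface_depth_mm, finger_depth_mm, tol_mm=10):
    # If the fingertip rests on the relief (its depth is within tol_mm
    # of the captured surface depth at that pixel), return the verbal
    # description for the touched region; otherwise return None.
    # `regions` maps (x0, y0, x1, y1) pixel boxes to description text.
    if abs(finger_depth_mm - surface_depth_mm) > tol_mm:
        return None
    for (x0, y0, x1, y1), text in regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return text
    return None
```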
We present a working prototype, discuss design decisions, and present the results of two evaluation studies: the first with 13 BVI test users, and the second, a follow-up study, with 14 test users spanning a wide range of differences and difficulties associated with perception, memory, cognition, and communication. The participant-led research method of this latter study prompted new, significant and innovative developments.
Finger-stylus for non touch-enable systems
Since the computer was invented, people have used many devices to interact with it. Initially there were the keyboard, mouse, etc., but with the advancement of technology, new input methods are being discovered that are quite natural to humans, like the stylus for touch-enabled systems. In the current age of technology, the user is expected to touch the machine interface to give input. Hand gestures can instead be used to interact with machines, where the natural bare hand communicates without touching the machine interface. This gives the user the feeling of interacting naturally, as with another human, rather than with a traditional machine. This paper presents a technique in which the user need not touch the machine interface to draw on the screen: the finger draws shapes on the monitor like a stylus, without touching the monitor. The method can be used in many applications, including games. The finger acts as an input device, like a paint-brush or finger-stylus, used to make shapes in front of the camera. Fingertip extraction and motion tracking were done in Matlab under real-time constraints. This work is an early attempt to replace the stylus with the natural finger, without touching the screen.
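The drawing pipeline hinges on locating the fingertip in each frame. A deliberately crude stand-in for the paper's Matlab extraction, assuming the hand enters from the bottom of the frame with the finger pointing up, so the fingertip is the topmost foreground pixel of a binary hand mask:

```python
def fingertip(mask):
    # Topmost foreground pixel of a binary hand mask (rows of 0/1),
    # scanned top-to-bottom; returns (x, y) or None if no hand pixel.
    # Successive fingertip positions then form the drawn stroke.
    for y, row in enumerate(mask):
        for x, v in enumerate(row):
            if v:
                return (x, y)
    return None
```

Real systems refine this with contour analysis and temporal smoothing so the stroke does not jitter, but the frame-by-frame tip position is the underlying "stylus" signal either way.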
A fast and robust hand-driven 3D mouse
The development of new interaction paradigms requires natural interaction: people should be able to interact with technology using the same models they use in everyday real life, that is, through gestures, expressions, and voice. Following this idea, in this paper we propose a non-intrusive, vision-based tracking system able to capture hand motion and simple hand gestures. The proposed device allows the hand to be used as a "natural" 3D mouse, where the forefinger tip or the palm centre identifies a 3D marker and hand gestures can be used to simulate the mouse buttons. The approach is based on a monoscopic tracking algorithm that is computationally fast and robust against noise and cluttered backgrounds. Two image streams are processed in parallel, exploiting multi-core architectures, and their results are combined to obtain a constrained stereoscopic problem. The system has been implemented and thoroughly tested in an experimental environment where the 3D hand mouse was used to interact with objects in a virtual reality application. We also provide results on the performance of the tracker, which demonstrate the precision and robustness of the proposed system.
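For a rectified camera pair, combining the two monoscopic trackers' results reduces to depth from disparity, Z = f·B / (x_left − x_right). A sketch under that rectified-pair assumption (parameter names are ours; the paper's calibration details are not reproduced):

```python
def triangulate(xl, xr, y, focal_px, baseline_m):
    # Recover the 3D marker position from the fingertip's image
    # coordinates in a rectified stereo pair: xl, xr are the x
    # coordinates in the left/right images (pixels, principal point
    # assumed at x = 0), y the shared row, focal_px the focal length
    # in pixels, baseline_m the camera separation in metres.
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point not in front of the rig")
    z = focal_px * baseline_m / d           # depth from disparity
    return (xl * z / focal_px, y * z / focal_px, z)
```

Large disparities mean near points, so depth resolution is best close to the cameras, which suits a desk-range 3D mouse.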
- …