3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit
We present 3DTouch, a novel 3D wearable input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap left by the lack of a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution for designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability; as a result, 3DTouch is conceptually less fatiguing to use over many hours than free-hand 3D spatial input devices. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build on.
Comment: 8 pages, 7 figures
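The relative positioning described above can be sketched as a simple fusion step: the optical sensor reports a 2D displacement in the plane of the touch surface, and the IMU supplies the device's orientation, which re-expresses that displacement in world coordinates. This is a minimal illustration, not the paper's implementation; the quaternion convention and function names are assumptions.

```python
import numpy as np

def quat_to_rotmat(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def fuse_step(position, quat, optical_dxdy):
    """One relative-positioning step (illustrative): rotate the
    sensor-plane displacement (dx, dy, 0) into the world frame using
    the IMU orientation, then integrate it into the 3D position."""
    dx, dy = optical_dxdy
    local = np.array([dx, dy, 0.0])        # displacement in the sensor plane
    world = quat_to_rotmat(quat) @ local   # re-express in world coordinates
    return position + world

# With identity orientation, a 1 mm swipe along x moves the estimate along world x.
p = fuse_step(np.zeros(3), (1.0, 0.0, 0.0, 0.0), (1.0, 0.0))
```

Because only relative displacements are integrated, drift accumulates over time; in practice such a device would re-anchor the estimate whenever the finger lifts and touches down again.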
Gesture recognition through angle space
As the notion of ubiquitous computing becomes a reality, the keyboard-and-mouse paradigm becomes less satisfactory as an input modality. The ability to interpret gestures can open another dimension in user interface technology. In this paper, we present a novel approach to dynamic hand gesture modeling using neural networks. The results show high accuracy in detecting single and multiple gestures, which makes this a promising approach for gesture recognition from continuous input with undetermined boundaries. This method is independent of the input device and can be applied as a general back-end processor for gesture recognition systems.
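The core idea of an angle-space representation can be sketched as follows: a gesture trajectory is encoded by the direction angles between successive points rather than by raw coordinates, which makes the encoding independent of where the gesture is drawn and how large it is. This is a minimal sketch of such an encoding, not the paper's exact method; the fixed-length resampling is an assumption.

```python
import numpy as np

def to_angle_space(points, n_features=16):
    """Encode a 2D gesture trajectory as a fixed-length vector of
    direction angles between successive points. Angles are invariant
    to translation and uniform scaling of the trajectory."""
    pts = np.asarray(points, dtype=float)
    deltas = np.diff(pts, axis=0)
    angles = np.arctan2(deltas[:, 1], deltas[:, 0])   # heading of each segment
    # Resample to a fixed length so every gesture yields the same dimension,
    # suitable as input to a neural-network classifier.
    idx = np.linspace(0, len(angles) - 1, n_features)
    return np.interp(idx, np.arange(len(angles)), angles)

# A straight left-to-right stroke encodes as all-zero angles,
# regardless of where it starts or how long it is.
stroke = [(x, 0.0) for x in range(10)]
features = to_angle_space(stroke, n_features=8)
```

A feature vector like this would then be fed to a standard feed-forward classifier; note that naive linear interpolation does not handle the angle wrap-around at ±π, which a full implementation would need to address.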
Pictures in Your Mind: Using Interactive Gesture-Controlled Reliefs to Explore Art
Tactile reliefs offer many benefits over the more classic raised line drawings or tactile diagrams, as depth, 3D shape, and surface textures are directly perceivable. Although often created for blind and visually impaired (BVI) people, a wider range of people may benefit from such multimodal material. However, some reliefs are still difficult to understand without proper guidance or accompanying verbal descriptions, hindering autonomous exploration.
In this work, we present a gesture-controlled interactive audio guide (IAG) based on recent low-cost depth cameras that can be operated directly with the hands on relief surfaces during tactile exploration. The interactively explorable, location-dependent verbal and captioned descriptions promise rapid tactile accessibility to 2.5D spatial information in a home or education setting, to online resources, or as a kiosk installation at public places.
We present a working prototype, discuss design decisions, and present the results of two evaluation studies: the first with 13 BVI test users and a follow-up study with 14 test users across a wide range of people with differences and difficulties associated with perception, memory, cognition, and communication. The participant-led research method of this latter study prompted new, significant, and innovative developments.
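The location-dependent descriptions described above amount to a lookup from a tracked fingertip position on the relief surface to the caption for the region being touched, announced only when the finger enters a new region. This is an illustrative sketch under assumed conventions (surface coordinates in millimetres, axis-aligned regions), not the prototype's actual implementation.

```python
class AudioGuide:
    """Minimal location-dependent description lookup: fingertip position
    (e.g. from a depth camera, in relief-surface coordinates) -> caption.
    Regions and captions below are hypothetical examples."""

    def __init__(self, regions):
        # name -> ((x0, y0), (x1, y1), caption), axis-aligned boxes
        self.regions = regions
        self.current = None  # region the finger is currently inside

    def update(self, x, y):
        """Return a caption when the finger enters a new region, else None."""
        for name, ((x0, y0), (x1, y1), caption) in self.regions.items():
            if x0 <= x < x1 and y0 <= y < y1:
                if name != self.current:
                    self.current = name
                    return caption   # speak/caption only on region entry
                return None          # still in the same region: stay silent
        self.current = None          # finger is between regions
        return None

guide = AudioGuide({
    "sky": ((0, 0), (200, 80), "The upper band shows a cloudy sky."),
})
first = guide.update(10, 10)   # entering the region triggers the caption
second = guide.update(12, 12)  # staying inside it does not repeat it
```

Suppressing repeated announcements while the finger dwells in one region is what lets the user explore the surface tactilely without the audio constantly restarting.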
Investigation of a new method for improving image resolution for camera tracking applications
Camera-based systems have been a preferred choice in many motion tracking applications due to the ease of installation and the ability to work in unprepared environments. The concept of these systems is based on extracting image information (colour and shape properties) to detect the object location. However, the resolution of the image and the camera field-of-view (FOV) are two main factors that can restrict the tracking applications for which these systems can be used. Resolution can be addressed partially by using higher-resolution cameras, but this may not always be possible or cost-effective.
This research paper investigates a new method utilising averaging of offset images to improve the effective resolution using a standard camera. The initial results show that the minimum detectable position change of a tracked object could be improved by up to 4 times.
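The idea of averaging offset images can be sketched as placing each low-resolution frame onto a finer grid at its known sub-pixel offset and averaging where samples overlap. This is a minimal illustration under simplifying assumptions (offsets are known exactly and are multiples of 1/scale pixel, e.g. from a controlled sensor shift), not the paper's method.

```python
import numpy as np

def stack_offset_frames(frames, offsets, scale):
    """Combine low-resolution frames onto a grid `scale` times finer.
    Each frame is inserted at its known (dy, dx) sub-pixel offset;
    cells that receive multiple samples are averaged."""
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    cnt = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        oy, ox = round(dy * scale), round(dx * scale)
        acc[oy::scale, ox::scale] += frame
        cnt[oy::scale, ox::scale] += 1
    cnt[cnt == 0] = 1                # leave unfilled cells at zero
    return acc / cnt

# Four frames offset by half a pixel exactly fill a 2x finer grid.
frames = [np.full((2, 2), v, dtype=float) for v in (1, 2, 3, 4)]
offsets = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
hi = stack_offset_frames(frames, offsets, scale=2)
```

With real camera data the offsets would have to be estimated by sub-pixel registration, and averaging also reduces noise, which is part of why the minimum detectable position change improves.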