3DTouch: A wearable 3D input device with an optical sensor and a 9-DOF inertial measurement unit
We present 3DTouch, a novel wearable 3D input device worn on the fingertip for 3D manipulation tasks. 3DTouch is designed to fill the gap of a 3D input device that is self-contained, mobile, and works universally across various 3D platforms. This paper presents a low-cost solution to designing and implementing such a device. Our approach relies on a relative positioning technique using an optical laser sensor and a 9-DOF inertial measurement unit. The device employs touch input for the benefits of passive haptic feedback and movement stability; compared with 3D spatial input devices, touch interaction also makes 3DTouch conceptually less fatiguing to use over many hours. We propose a set of 3D interaction techniques, including selection, translation, and rotation, using 3DTouch. An evaluation also demonstrates the device's tracking accuracy of 1.10 mm and 2.33 degrees for subtle touch interaction in 3D space. Modular solutions like 3DTouch open up a whole new design space for interaction techniques to build upon.
Comment: 8 pages, 7 figures
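The relative-positioning idea described in the abstract can be sketched roughly as follows: the optical laser sensor reports a 2-D displacement in the sensor plane, and the IMU's orientation estimate lifts that displacement into world coordinates. This is an illustrative sketch under assumed conventions, not the paper's implementation; the function names and the (w, x, y, z) quaternion layout are assumptions.

```python
import numpy as np

def rotate_by_quaternion(v, q):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    r = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return r @ v

def world_displacement(dx, dy, q):
    """Lift the optical sensor's in-plane motion (dx, dy) into the
    world frame using the IMU's orientation quaternion q."""
    return rotate_by_quaternion(np.array([dx, dy, 0.0]), q)
```

With the identity quaternion the sensor plane coincides with the world's x-y plane; any tilt of the fingertip reported by the IMU reorients the displacement accordingly.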
Multi-Level Sensory Interpretation and Adaptation in a Mobile Cube
Signals from sensors are often analyzed in a sequence of steps, starting with the raw sensor data and eventually ending with a classification or abstraction of those data. This paper gives a practical example of how the same information can be trained and used to initiate multiple interpretations of the same data on different, application-oriented levels. Crucially, the focus is on expanding the embedded analysis software rather than adding more powerful, but possibly resource-hungry, sensors. Our illustration of this approach involves a tangible input device in the shape of a cube that relies exclusively on low-cost accelerometers. The cube supports calibration with user supervision; it can tell which of its sides is on top, give an estimate of its orientation relative to the user, and recognize basic gestures.
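As a rough illustration of the "which side is on top" interpretation level: a resting accelerometer measures the reaction to gravity, so comparing the reading against the cube's six face normals identifies the upward face. The face names and body-frame convention below are hypothetical, not taken from the paper.

```python
import numpy as np

# Face normals in the cube's body frame (assumed axis convention).
FACE_NORMALS = {
    "top":    np.array([0.0, 0.0, 1.0]),
    "bottom": np.array([0.0, 0.0, -1.0]),
    "north":  np.array([0.0, 1.0, 0.0]),
    "south":  np.array([0.0, -1.0, 0.0]),
    "east":   np.array([1.0, 0.0, 0.0]),
    "west":   np.array([-1.0, 0.0, 0.0]),
}

def side_on_top(accel):
    """accel: a 3-axis accelerometer reading in the cube's body frame.
    At rest the reading points 'up', so the face whose normal best
    aligns with it is the one currently facing the sky."""
    up = np.asarray(accel, dtype=float)
    up /= np.linalg.norm(up)
    return max(FACE_NORMALS, key=lambda f: FACE_NORMALS[f] @ up)
```

Noisy, slightly tilted readings still resolve correctly because only the dominant direction matters.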
Recognition of elementary arm movements using orientation of a tri-axial accelerometer located near the wrist
In this paper we present a method for recognising three fundamental movements of the human arm (reach and retrieve, lift cup to mouth, rotation of the arm) by determining the orientation of a tri-axial accelerometer located near the wrist. Our objective is to detect the occurrence of such movements performed with the impaired arm of a stroke patient during normal daily activities as a means to assess their rehabilitation. The method relies on accurately mapping transitions between predefined, standard orientations of the accelerometer to corresponding elementary arm movements. To evaluate the technique, kinematic data were collected from four healthy subjects and four stroke patients as they performed a number of tasks involved in a representative activity of daily living, 'making-a-cup-of-tea'. Our experimental results show that the proposed method can independently recognise all three of the elementary upper limb movements investigated, with accuracies in the range 91–99% for healthy subjects and 70–85% for stroke patients.
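The transition-mapping idea might be sketched as follows: quantise each accelerometer reading into a coarse standard orientation, then look up the (previous, current) orientation pair in a transition table. The caveat is that the actual orientation set and the movement mapping are defined in the paper; the labels and table below are purely illustrative.

```python
# Illustrative sketch: coarse orientation labels from the dominant axis.
def orientation_label(ax, ay, az):
    """Classify a tri-axial reading by its dominant axis and sign."""
    mags = {"x": ax, "y": ay, "z": az}
    axis = max(mags, key=lambda k: abs(mags[k]))
    sign = "+" if mags[axis] >= 0 else "-"
    return sign + axis

# Hypothetical transition table; the real mapping comes from the paper.
TRANSITIONS = {
    ("+z", "+y"): "lift cup to mouth",
    ("+y", "+z"): "lower cup",
    ("+z", "+x"): "rotate arm",
}

def recognise(prev_reading, curr_reading):
    """Map a pair of consecutive readings to a movement, or None."""
    key = (orientation_label(*prev_reading), orientation_label(*curr_reading))
    return TRANSITIONS.get(key)
```

Transitions that stay within one coarse orientation produce no movement label, which is one simple way to ignore small jitter between samples.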
Pointing Devices for Wearable Computers
We present a survey of pointing devices for wearable computers, which are body-mounted devices that users can access at any time. Since traditional pointing devices (i.e., mouse, touchpad, and trackpoint) were designed to be used on a steady and flat surface, they are inappropriate for wearable computers. Just as the advent of laptops resulted in the development of the touchpad and trackpoint, the emergence of wearable computers is leading to the development of pointing devices designed for them. However, unlike laptops, since wearable computers are operated from different body positions under different environmental conditions for different uses, researchers have developed a variety of innovative pointing devices for wearable computers characterized by their sensing mechanism, control mechanism, and form factor. We survey a representative set of pointing devices for wearable computers using an "adaptation of traditional devices" versus "new devices" dichotomy and study devices according to their control and sensing mechanisms and form factor. The objective of this paper is to showcase a variety of pointing devices developed for wearable computers and bring structure to the design space for wearable pointing devices. We conclude that a de facto pointing device for wearable computers, unlike laptops, is not likely to emerge.
GART: The Gesture and Activity Recognition Toolkit
Presented at the 12th International Conference on Human-Computer Interaction, Beijing, China, July 2007. The original publication is available at www.springerlink.com.
The Gesture and Activity Recognition Toolkit (GART) is
a user interface toolkit designed to enable the development of gesture-based
applications. GART provides an abstraction to machine learning
algorithms suitable for modeling and recognizing different types of
gestures. The toolkit also provides support for the data collection and
the training process. In this paper, we present GART and its machine
learning abstractions. Furthermore, we detail the components of the
toolkit and present two example gesture recognition applications.
Recognition of elementary upper limb movements in an activity of daily living using data from wrist mounted accelerometers
In this paper we present a methodology, as a proof of concept, for recognizing fundamental movements of the human arm (extension, flexion and rotation of the forearm) involved in 'making-a-cup-of-tea', typical of an activity of daily living (ADL). The movements are initially performed in a controlled environment as part of a training phase, and the data are grouped into three clusters using k-means clustering. Movements performed during the ADL, forming part of the testing phase, are associated with each cluster label using a minimum distance classifier in a multi-dimensional feature space, comprising features selected from a ranked set of 30 features, with Euclidean and Mahalanobis distance as the metric. Experiments were performed with four healthy subjects, and our results show that the proposed methodology can detect the three movements with an overall average accuracy of 88% across all subjects and arm movement types using the Euclidean distance classifier.
Pointing Without a Pointer
We present a method for performing selection tasks based on continuous control of multiple, competing agents that try to determine the user's intentions from their control behaviour, without requiring an explicit pointer. The entropy in the selection process decreases in a continuous fashion; we provide experimental evidence of selection from 500 initial targets. The approach allows adaptation over time to best make use of the multimodal communication channel between the human and the system. This general approach is well suited to mobile and wearable applications, shared displays, and security-conscious settings.
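The entropy-decreasing selection process can be illustrated with a simple Bayesian belief update over candidate targets: evidence about how well each target's predicted behaviour matches the user's control signal reweights the belief, and its entropy falls as one target comes to dominate. This is a toy sketch of the general idea, not the paper's control model.

```python
import math

def update(belief, likelihoods):
    """One Bayesian step: posterior proportional to prior x likelihood."""
    post = [b * l for b, l in zip(belief, likelihoods)]
    z = sum(post)
    return [p / z for p in post]

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief distribution."""
    return -sum(p * math.log2(p) for p in belief if p > 0)
```

Starting from a uniform belief over n targets (entropy log2(n)), each update that favours one target over the rest strictly reduces the entropy until selection is unambiguous.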
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Owing to this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among others, the most commonly used features, methods, challenges, and opportunities within the field.
Comment: First Person Vision, Egocentric Vision, Wearable Devices, Smart Glasses, Computer Vision, Video Analytics, Human-machine Interaction
BodySpace: inferring body pose for natural control of a music player
We describe the BodySpace system, which uses inertial sensing and pattern recognition to allow the gestural control of a music player by placing the device at different parts of the body. We demonstrate a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based techniques can shape gestural interaction.