DeepASL: Enabling Ubiquitous and Non-Intrusive Word and Sentence-Level Sign Language Translation
There is an undeniable communication barrier between deaf people and people
with normal hearing ability. Although innovations in sign language translation
technology aim to tear down this communication barrier, the majority of
existing sign language translation systems are either intrusive or constrained
by resolution or ambient lighting conditions. Moreover, these existing systems
can only perform single-sign ASL translation rather than sentence-level
translation, making them much less useful in daily-life communication
scenarios. In this work, we fill this critical gap by presenting DeepASL, a
transformative deep learning-based sign language translation technology that
enables ubiquitous and non-intrusive American Sign Language (ASL) translation
at both word and sentence levels. DeepASL uses infrared light as its sensing
mechanism to non-intrusively capture the ASL signs. It incorporates a novel
hierarchical bidirectional deep recurrent neural network (HB-RNN) and a
probabilistic framework based on Connectionist Temporal Classification (CTC)
for word-level and sentence-level ASL translation, respectively. To evaluate its performance, we collected 7,306 samples from 11 participants, covering 56 commonly used ASL words and 100 ASL sentences. DeepASL achieves an average word-level translation accuracy of 94.5% and an average word error rate of 8.2% when translating unseen ASL sentences. Given its promising performance, we believe DeepASL represents a significant step towards breaking the communication barrier between deaf people and the hearing majority, and thus has significant potential to fundamentally change deaf people's lives.
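Although the abstract does not disclose implementation details, the CTC-based sentence-level pipeline it describes follows a well-known pattern: a recurrent encoder emits per-frame word probabilities, and a CTC loss aligns them with an unsegmented word sequence. The PyTorch sketch below illustrates only that general pattern; the BiLSTM encoder, feature dimensions, and sequence lengths are illustrative assumptions, not the paper's HB-RNN architecture.

```python
# Minimal sketch of CTC-based sentence-level translation in PyTorch.
# All shapes, the vocabulary size, and the BiLSTM encoder are
# illustrative assumptions; the paper's HB-RNN is not reproduced here.
import torch
import torch.nn as nn

NUM_WORDS = 56          # word vocabulary size from the paper's dataset
BLANK = NUM_WORDS       # CTC blank label appended after the real classes

class SentenceTranslator(nn.Module):
    def __init__(self, feat_dim=64, hidden=128):
        super().__init__()
        self.encoder = nn.LSTM(feat_dim, hidden, bidirectional=True,
                               batch_first=True)
        self.head = nn.Linear(2 * hidden, NUM_WORDS + 1)  # +1 for blank

    def forward(self, x):                    # x: (batch, time, feat_dim)
        h, _ = self.encoder(x)
        return self.head(h).log_softmax(-1)  # (batch, time, classes)

model = SentenceTranslator()
ctc = nn.CTCLoss(blank=BLANK, zero_infinity=True)

frames = torch.randn(4, 120, 64)             # 4 sequences, 120 frames each
log_probs = model(frames).permute(1, 0, 2)   # CTCLoss expects (T, N, C)
targets = torch.randint(0, NUM_WORDS, (4, 5))  # hypothetical 5-word sentences
loss = ctc(log_probs, targets,
           torch.full((4,), 120, dtype=torch.long),   # input lengths
           torch.full((4,), 5, dtype=torch.long))     # target lengths
loss.backward()
```

At decoding time, a greedy or beam-search collapse of repeated labels and blanks recovers the word sequence, which is what lets CTC translate whole sentences without per-sign segmentation.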
Pointing Devices for Wearable Computers
We present a survey of pointing devices for wearable computers, which are body-mounted devices that users can access at any time. Since traditional pointing devices (i.e., mouse, touchpad, and trackpoint) were designed to be used on a steady, flat surface, they are inappropriate for wearable computers. Just as the advent of laptops led to the development of the touchpad and trackpoint, the emergence of wearable computers is leading to the development of pointing devices designed for them. However, unlike laptops, wearable computers are operated from different body positions, under different environmental conditions, and for different uses, so researchers have developed a variety of innovative pointing devices characterized by their sensing mechanism, control mechanism, and form factor. We survey a representative set of pointing devices for wearable computers using an “adaptation of traditional devices” versus “new devices” dichotomy and study devices according to their control and sensing mechanisms and form factor. The objective of this paper is to showcase the variety of pointing devices developed for wearable computers and to bring structure to the design space for wearable pointing devices. We conclude that, unlike for laptops, a de facto pointing device for wearable computers is not likely to emerge.
A new method for interacting with multi-window applications on large, high resolution displays
Physically large display walls can now be constructed using off-the-shelf computer hardware. The high resolution of these displays (e.g., 50 million pixels) means that a large quantity of data can be presented to users, so the displays are well suited to visualization applications. However, current methods of interacting with display walls are somewhat time consuming. We have analyzed how users solve real visualization problems using three desktop applications (XmdvTool, Iris Explorer, and ArcView), and used a new taxonomy to classify users’ actions and illustrate the deficiencies of current display wall interaction methods. Following this, we designed a novel method for interacting with display walls, which aims to let users interact as quickly as when a visualization application is used on a desktop system. Informal feedback gathered from our working prototype shows that interaction is both fast and fluid.
The Design, Implementation, and Evaluation of a Pointing Device For a Wearable Computer
U.S. Air Force special tactics operators at times use small wearable computers (SWCs) for mission objectives. The primary pointing device of a SWC is either a touchpad or trackpoint embedded in the chassis of the SWC. In situations where the user cannot directly interact with these pointing devices, the utility of the SWC is decreased. We developed a pointing device called the G3 that can be used with the SWCs used by operators. The device utilizes gyroscopic sensors attached to the user’s index finger to move the computer cursor according to the angular velocity of the finger. We showed that, as measured by Fitts’s law, the overall performance and accuracy of the G3 were better than those of the touchpad and trackpoint. These findings suggest that the G3 can adequately be used with SWCs. Additionally, we investigated the G3’s utility as a control device for operating micro remotely piloted aircraft.
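Fitts's law scores a pointing device by relating target distance and width to movement time. As a hedged illustration of the kind of evaluation the abstract reports, the sketch below computes the Shannon-formulation index of difficulty and a mean throughput over invented trial data; the numbers are hypothetical and are not the study's measurements.

```python
# Hedged sketch: scoring a pointing device with Fitts's law
# (Shannon formulation, ID = log2(D/W + 1)). The trial data below
# are invented for illustration only.
import math

def index_of_difficulty(distance, width):
    """Index of difficulty in bits for a target of the given
    width at the given distance (same units)."""
    return math.log2(distance / width + 1)

def throughput(trials):
    """Mean throughput in bits/s over (distance, width, time_s) trials."""
    return sum(index_of_difficulty(d, w) / t for d, w, t in trials) / len(trials)

# Hypothetical trials: (distance px, target width px, movement time s)
g3_trials = [(400, 40, 0.9), (800, 40, 1.3), (400, 20, 1.2)]
touchpad_trials = [(400, 40, 1.1), (800, 40, 1.7), (400, 20, 1.5)]

print(f"G3 throughput:       {throughput(g3_trials):.2f} bits/s")
print(f"Touchpad throughput: {throughput(touchpad_trials):.2f} bits/s")
```

A higher throughput indicates that the device conveys more pointing "information" per second, which is how a gyroscopic finger-worn device can be compared fairly against a touchpad or trackpoint.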
Orochi: Investigating Requirements and Expectations for Multipurpose Daily Used Supernumerary Robotic Limbs
Supernumerary robotic limbs (SRLs) present many opportunities for daily use. However, their obtrusiveness and their limited interaction genericity hinder such use. To address these challenges, we extracted three design considerations from previous literature and embodied them in a wearable we call Orochi: 1) multipurpose use, 2) wearability by context, and 3) unobtrusiveness in public. We implemented Orochi as a snake-shaped robot with 25 DoFs and two end effectors, and demonstrated several novel interactions enabled by its limber design. Using Orochi, we conducted hands-on focus groups to explore how multipurpose SRLs might be used daily, and we conducted a survey to explore how they are perceived when used in public. Participants approved of Orochi's design and proposed different use cases and postures in which it could be worn. Orochi's unobtrusive design was generally well received, yet its novel interactions raise several challenges for social acceptance. We discuss the significance of our results by highlighting future research opportunities based on the design, implementation, and evaluation of Orochi.
Context-aware gestural interaction in the smart environments of the ubiquitous computing era
A thesis submitted to the University of Bedfordshire in partial fulfilment of the requirements for the degree of Doctor of Philosophy.
Technology is becoming pervasive, and current interfaces are not adequate for interaction with the smart environments of the ubiquitous computing era. Recently, researchers have started to address this issue by introducing the concept of the natural user interface, which is mainly based on gestural interactions. Many issues are still open in this emerging domain; in particular, there is a lack of common guidelines for the coherent implementation of gestural interfaces.
This research investigates gestural interactions between humans and smart environments. It proposes a novel framework for the high-level organization of context information. The framework is conceived to support a novel approach that uses functional gestures to reduce gesture ambiguity, shrink the number of gestures in taxonomies, and improve usability.
To validate this framework, a proof-of-concept prototype was developed, implementing a novel method for the view-invariant recognition of deictic and dynamic gestures. Tests were conducted to assess the gesture recognition accuracy and the usability of interfaces developed following the proposed framework. The results show that the method provides accurate gesture recognition from very different viewpoints, while the usability tests yielded high scores.
Further investigation of the context information tackles the problem of user status, understood here as human activity, for which a technique based on an innovative application of electromyography is proposed (a brief illustrative sketch follows this abstract). The tests show that the proposed technique achieves good activity recognition accuracy.
Context is also treated as system status. In ubiquitous computing, the system can adopt different paradigms: wearable, environmental, and pervasive. A novel paradigm, called the synergistic paradigm, is presented, combining the advantages of the wearable and environmental paradigms. Moreover, it augments the interaction possibilities of the user and ensures better gesture recognition accuracy than the other paradigms.
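The abstract does not specify the EMG pipeline, but activity recognition from electromyography is commonly done by extracting time-domain features from signal windows and training a classifier. The sketch below assumes that common pattern (synthetic signals, standard features, an off-the-shelf SVM) purely for illustration; it is not the thesis's exact method.

```python
# Minimal sketch of EMG-based activity recognition using common
# time-domain features and an off-the-shelf classifier. The feature
# set and the SVM choice are standard practice, not necessarily the
# thesis's pipeline; the signals below are synthetic.
import numpy as np
from sklearn.svm import SVC

def emg_features(window):
    """Mean absolute value, root mean square, and zero-crossing
    count for one EMG window."""
    return np.array([
        np.mean(np.abs(window)),
        np.sqrt(np.mean(window ** 2)),
        np.count_nonzero(np.diff(np.sign(window))),
    ])

rng = np.random.default_rng(0)
# Synthetic windows: "walking" has higher amplitude than "sitting".
windows = [rng.normal(0, amp, 256) for amp in [0.2] * 50 + [1.0] * 50]
labels = ["sitting"] * 50 + ["walking"] * 50

X = np.stack([emg_features(w) for w in windows])
clf = SVC().fit(X, labels)
print(clf.predict(emg_features(rng.normal(0, 1.0, 256)).reshape(1, -1)))
```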
Two Hand Gesture Based 3D Navigation in Virtual Environments
Natural interaction is gaining popularity due to its simple, attractive, and realistic nature, which enables direct Human Computer Interaction (HCI). In this paper, we present a novel two-hand gesture based interaction technique for three-dimensional (3D) navigation in Virtual Environments (VEs). The system uses computer vision techniques to detect hand gestures (colored thumbs) in the real scene and performs different navigation tasks (forward, backward, up, down, left, and right) in the VE. The proposed technique also allows users to efficiently control speed during navigation. The technique was implemented in a VE for experimental purposes, and forty (40) participants performed the experimental study. Experiments revealed that the proposed technique is feasible, easy to learn and use, and imposes little cognitive load on users. Finally, gesture recognition engines were used to assess the accuracy and performance of the proposed gestures. kNN achieved a higher accuracy rate (95.7%) than SVM (95.3%); kNN also performed better in terms of training time (3.16 s) and prediction speed (6,600 obs/s) compared with SVM's 6.40 s and 2,900 obs/s.
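The style of comparison reported here (accuracy, training time, and prediction speed for kNN versus SVM) can be reproduced with scikit-learn. The sketch below uses synthetic gesture features, so its numbers will not match the paper's, and the feature dimensions are assumptions.

```python
# Hedged sketch of a kNN-vs-SVM comparison on accuracy, training
# time, and prediction speed, using synthetic gesture features.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for hand-gesture feature vectors (6 gesture classes).
X, y = make_classification(n_samples=2000, n_features=12, n_classes=6,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("kNN", KNeighborsClassifier()), ("SVM", SVC())]:
    t0 = time.perf_counter()
    clf.fit(X_tr, y_tr)
    train_s = time.perf_counter() - t0
    t0 = time.perf_counter()
    acc = clf.score(X_te, y_te)
    obs_per_s = len(X_te) / (time.perf_counter() - t0)
    print(f"{name}: accuracy={acc:.3f}, train={train_s:.2f}s, "
          f"predict={obs_per_s:.0f} obs/s")
```

Note that kNN has no real training phase (it stores the data), which is why its training time can be much lower than SVM's, while its per-observation prediction cost depends on the training set size.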