7,803 research outputs found
Low-cost natural interface based on head movements
People often seek freedom in the virtual world, but not everyone can interact with a computer in the same way. Nowadays almost every job requires interaction with computerized systems, so people with physical impairments do not have the same freedom to control a mouse, a keyboard, or a touchscreen. In recent years, some government programs to help people with reduced mobility suffered greatly from the global economic crisis, and some were even cut to reduce costs. This paper focuses on the development of a touchless human-computer interface that allows anyone to control a computer without using a keyboard, mouse, or touchscreen. By reusing Microsoft Kinect sensors from old videogame consoles, a low-cost, easy-to-use, open-source interface was developed that allows control of a computer using only head, eye, or mouth movements, optionally complemented by sound commands. Similar commercial solutions are already available, but they are so expensive that their price tends to be a real obstacle to purchase; free solutions, on the other hand, usually do not offer the freedom that people with reduced mobility need. The present solution addresses these drawbacks. (C) 2015 Published by Elsevier B.V.
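The core of such an interface is translating head pose into cursor motion. The abstract does not give the mapping used; the sketch below is an illustrative assumption, showing a dead zone (to ignore involuntary head tremor) and a linear gain, with all names and constants invented for illustration.

```python
# Illustrative sketch (not the paper's actual code): mapping head-pose
# angles from a Kinect-style tracker to relative cursor movement.
# DEAD_ZONE_DEG and GAIN are assumed values, not from the paper.

DEAD_ZONE_DEG = 3.0   # ignore small involuntary head motions
GAIN = 8.0            # cursor pixels per degree beyond the dead zone

def head_pose_to_cursor_delta(yaw_deg, pitch_deg):
    """Convert head yaw/pitch (degrees) into a (dx, dy) cursor offset."""
    def axis(angle):
        if abs(angle) <= DEAD_ZONE_DEG:
            return 0.0
        sign = 1.0 if angle > 0 else -1.0
        return sign * (abs(angle) - DEAD_ZONE_DEG) * GAIN
    # Yaw (turning left/right) drives horizontal motion; pitch (nodding
    # up/down) drives vertical motion, with screen y growing downward.
    return axis(yaw_deg), -axis(pitch_deg)

print(head_pose_to_cursor_delta(2.0, 0.0))   # inside dead zone -> (0.0, 0.0)
print(head_pose_to_cursor_delta(5.0, -4.0))  # -> (16.0, 8.0)
```

A real implementation would feed these deltas to the operating system's pointer API each frame; the dead zone is what keeps the cursor stable while the user reads the screen.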
Distributed-Pair Programming can work well and is not just Distributed Pair-Programming
Background: Distributed pair programming can be performed via screen sharing
or via a distributed IDE. The latter offers the freedom of concurrent editing
(which may be helpful or damaging) and has even more awareness deficits than
screen sharing. Objective: Characterize how competent distributed pair
programmers may handle this additional freedom and these additional awareness
deficits, and characterize the impacts on the pair programming process. Method:
A revelatory case study, based on direct observation of a single, highly
competent distributed pair of industrial software developers during a 3-day
collaboration. We use recordings of these sessions and conceptualize the
phenomena seen. Results: 1. Skilled pairs may bridge the awareness deficits
without visible obstruction of the overall process. 2. Skilled pairs may use
the additional editing freedom in a useful, limited fashion, resulting in
potentially better fluency of the process than local pair programming.
Conclusion: When applied skillfully in an appropriate context, distributed-pair
programming can (not will!) work at least as well as local pair programming.
Human Robot Interface for Assistive Grasping
This work describes a new human-in-the-loop (HitL) assistive grasping system
for individuals with varying levels of physical capabilities. We investigated
the feasibility of using four potential input devices with our assistive
grasping system interface, using able-bodied individuals to define a set of
quantitative metrics that could be used to assess an assistive grasping system.
We then used these measurements to create a generalized benchmark for
evaluating the effectiveness of an arbitrary input device for a HitL grasping
system. The four input devices were a mouse, a speech recognition device, an
assistive switch, and a novel sEMG device developed by our group that was
connected either to the forearm or behind the ear of the subject. These
preliminary results provide insight into how different interface devices
perform for generalized assistive grasping tasks and also highlight the
potential of sEMG-based control for severely disabled individuals. Comment: 8 pages, 21 figures
Advanced and natural interaction system for motion-impaired users
Human-computer interaction is an important area that seeks better and more comfortable systems to promote communication between humans and machines. Vision-based interfaces can offer a more natural and appealing way of communicating; moreover, they can support the e-accessibility component of e-inclusion. The aim is to develop a usable system, that is, one that end users find effective, efficient, and satisfactory. The research's main contribution is SINA, a hands-free interface based on computer vision techniques for motion-impaired users. This interface does not require the user to move the upper limbs, as only nose motion is considered. Beyond the technical aspects, user satisfaction with an interface is a critical issue. The approach we adopted is to integrate usability evaluation at relevant points of the software development.
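A vision-based pointer like SINA must cope with frame-to-frame jitter in the tracked nose position. The paper's actual pipeline is not given here; the sketch below assumes a tracker that emits one raw position per frame and applies exponential smoothing before driving the cursor, with the class name and alpha value invented for illustration.

```python
# Minimal sketch, assuming a per-frame nose tracker: exponential
# smoothing reduces tracking jitter before the position drives the
# cursor. The interface and alpha below are illustrative assumptions.

class SmoothedPointer:
    def __init__(self, alpha=0.5):
        self.alpha = alpha        # 0 < alpha <= 1; smaller = smoother
        self.x = self.y = None    # no estimate until the first frame

    def update(self, raw_x, raw_y):
        """Feed one raw tracked position; return the smoothed position."""
        if self.x is None:        # first frame: adopt the raw position
            self.x, self.y = float(raw_x), float(raw_y)
        else:                     # move a fraction alpha toward the new sample
            self.x += self.alpha * (raw_x - self.x)
            self.y += self.alpha * (raw_y - self.y)
        return self.x, self.y

p = SmoothedPointer(alpha=0.5)
print(p.update(100, 100))  # (100.0, 100.0)
print(p.update(110, 100))  # (105.0, 100.0)
```

The trade-off is responsiveness versus stability: a lower alpha suppresses more jitter but makes the cursor lag deliberate head movements.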
Sensory Motor Remapping of Space in Human-Machine Interfaces
Studies of adaptation to patterns of deterministic forces have revealed the ability of the motor control system to form and use predictive representations of the environment. These studies have also pointed out that adaptation to novel dynamics is aimed at preserving the trajectories of a controlled endpoint, either the hand of a subject or a transported object. We review some of these experiments and present more recent studies aimed at understanding how the motor system forms representations of the physical space in which actions take place. An extensive line of investigations in visual information processing has dealt with the issue of how the Euclidean properties of space are recovered from visual signals that do not appear to possess these properties. The same question is addressed here in the context of motor behavior and motor learning by observing how people remap hand gestures and body motions that control the state of an external device. We present some theoretical considerations and experimental evidence about the ability of the nervous system to create novel patterns of coordination that are consistent with the representation of extrapersonal space. We also discuss the perspective of endowing human-machine interfaces with learning algorithms that, combined with human learning, may facilitate the control of powered wheelchairs and other assistive devices.
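The remapping idea can be made concrete with a toy example: a fixed linear map turns a higher-dimensional vector of body-signal readings into a low-dimensional device state (for instance, a 2-D cursor position) that the user then learns to control. The 4-sensor setup and the particular map below are invented for illustration, not taken from the studies reviewed.

```python
# Toy illustration of sensorimotor remapping: a fixed linear map A
# projects N body-signal readings onto a 2-D device state. The sensor
# count and the matrix values are illustrative assumptions.

def remap(signals, A):
    """Apply a linear map A (2 rows x N columns) to an N-vector of signals."""
    return tuple(sum(a * s for a, s in zip(row, signals)) for row in A)

A = [[1.0, 0.5, 0.0, 0.0],   # row driving the horizontal coordinate
     [0.0, 0.0, 1.0, -0.5]]  # row driving the vertical coordinate

signals = [1.0, 2.0, 0.5, 1.0]  # e.g., four normalized sensor readings
print(remap(signals, A))        # -> (2.0, 0.0)
```

The point of such experiments is that the map is initially arbitrary from the user's perspective; with practice, the nervous system builds a new coordination pattern that makes the device state behave like a point in ordinary space.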
Neuro-electronic technology in medicine and beyond
This dissertation looks at the technology and social issues involved in interfacing electronics directly to the human nervous system, in particular the methods for both reading and stimulating nerves. The development and use of cochlear implants is discussed and compared with recent developments in artificial vision. The final sections consider a future for non-medicinal applications of neuro-electronic technology. Social attitudes towards use for both medicinal and non-medicinal purposes are discussed, and the viability of use in the latter case is assessed.