In-home and remote use of robotic body surrogates by people with profound motor deficits
By controlling robots comparable to the human body, people with profound
motor deficits could potentially perform a variety of physical tasks for
themselves, improving their quality of life. The extent to which this is
achievable has been unclear due to the lack of suitable interfaces by which to
control robotic body surrogates and a dearth of studies involving substantial
numbers of people with profound motor deficits. We developed a novel, web-based
augmented reality interface that enables people with profound motor deficits to
remotely control a PR2 mobile manipulator from Willow Garage, which is a
human-scale, wheeled robot with two arms. We then conducted two studies to
investigate the use of robotic body surrogates. In the first study, 15 novice
users with profound motor deficits from across the United States controlled a
PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a
simulated self-care task. Participants achieved clinically meaningful
improvements on the ARAT and 12 of 15 participants (80%) successfully completed
the simulated self-care task. Participants agreed that the robotic system was
easy to use, was useful, and would provide a meaningful improvement in their
lives. In the second study, one expert user with profound motor deficits had
free use of a PR2 in his home for seven days. He performed a variety of
self-care and household tasks, and also used the robot in novel ways. Taking
both studies together, our results suggest that people with profound motor
deficits can improve their quality of life using robotic body surrogates, and
that they can gain benefit with only low-level robot autonomy and without
invasive interfaces. However, methods to reduce the rate of errors and increase
operational speed merit further investigation.
Comment: 43 pages, 13 figures
Towards transparent telepresence
It is proposed that the concept of transparent telepresence can be closely approached through high-fidelity technological mediation. It is argued that matching the system's capabilities to those of the human user will yield a strong sense of immersion and presence at a remote site. Some applications of such a system are noted. The concept is explained, and critical system elements are described together with an overview of some of the necessary system specifications.
Layout of Multiple Views for Volume Visualization: A User Study
Volume visualizations can have drastically different appearances when viewed using a variety of transfer functions. A problem then occurs in trying to organize many different views on one screen. We conducted a user study of four layout techniques for these multiple views. We timed participants as they separated different aspects of volume data for both time-invariant and time-variant data using one of four different layout schemes. The layout technique had no impact on performance when used with time-invariant data. With time-variant data, however, the multiple-view layouts all resulted in better times than did a single-view interface. Surprisingly, different layout techniques for multiple views resulted in no noticeable difference in user performance. In this paper, we describe our study and present the results, which could be used in the design of future volume visualization software to improve the productivity of the scientists who use it.
Games and Brain-Computer Interfaces: The State of the Art
BCI gaming is a very young field; most games are proofs of concept. Work that compares BCIs in game environments with traditional BCIs indicates no negative effects, or even a positive effect of the rich visual environments on performance. The low transfer rate of current BCIs poses a problem for game control. This is often solved by changing the goal of the game. Multi-modal input with BCI forms a promising solution, as does assigning more meaningful functionality to BCI control.
Why bad ideas are a good idea
What would happen if we wrote an Abstract that was the exact opposite of what the paper described? This is a bad idea,
but it makes us think more carefully than usual about properties of Abstracts. This paper describes BadIdeas, a collection
of techniques that uses "bad" or "silly" ideas to inspire creativity, explore design domains and teach critical thinking in
interaction design. We describe the approach, some evidence, how it is performed in practice, and experience in its use.
Teegi: Tangible EEG Interface
We introduce Teegi, a Tangible ElectroEncephaloGraphy (EEG) Interface that
enables novice users to get to know more about something as complex as brain
signals, in an easy, engaging and informative way. To this end, we have
designed a new system based on a unique combination of spatial augmented
reality, tangible interaction and real-time neurotechnologies. With Teegi, a
user can visualize and analyze his or her own brain activity in real-time, on a
tangible character that can be easily manipulated, and with which it is
possible to interact. An exploration study has shown that interacting with
Teegi seems to be easy, motivating, reliable and informative. Overall, this
suggests that Teegi is a promising and relevant training and mediation tool for
the general public.
Comment: to appear in UIST-ACM User Interface Software and Technology
Symposium, Oct 2014, Honolulu, United States
Music Maker – A Camera-based Music Making Tool for Physical Rehabilitation
The therapeutic effects of playing music are being recognized increasingly in the field of rehabilitation medicine. People with physical disabilities, however, often do not have the motor dexterity needed to play an instrument. We developed a camera-based human-computer interface called "Music Maker" to provide such people with a means to make music by performing therapeutic exercises. Music Maker uses computer vision techniques to convert the movements of a patient's body part, for example, a finger, hand, or foot, into musical and visual feedback using the open software platform EyesWeb. It can be adjusted to a patient's particular therapeutic needs and provides quantitative tools for monitoring the recovery process and assessing therapeutic outcomes. We tested the potential of Music Maker as a rehabilitation tool with six subjects who responded to or created music in various movement exercises. In these proof-of-concept experiments, Music Maker has performed reliably and shown its promise as a therapeutic device.
Funding: National Science Foundation (IIS-0308213, IIS-039009, IIS-0093367, P200A01031, EIA-0202067 to M.B.); National Institutes of Health (DC-03663 to E.S.); Boston University (Dudley Allen Sargent Research Fund to A.L.)
Design Strategies for Playful Technologies to Support Light-intensity Physical Activity in the Workplace
Moderate- to vigorous-intensity physical activity has an established
preventative role in obesity, cardiovascular disease, and diabetes. However,
recent evidence suggests that sitting time affects health negatively,
independent of whether adults meet prescribed physical activity guidelines.
Since many of us spend long hours daily sitting in front of a host of
electronic screens, this is cause for concern. In this paper, we describe a set
of three prototype digital games created for encouraging light-intensity
physical activity during short breaks at work. The design of these kinds of
games is a complex process that must consider motivation strategies,
interaction methodology, usability and ludic aspects. We present design
guidelines for technologies that encourage physical activity in the workplace
that we derived from a user evaluation using the prototypes. Although the
design guidelines can be seen as general principles, we conclude that they have
to be considered differently for different workplace cultures and workspaces.
Our study was conducted with users who have some experience playing casual
games on their mobile devices and were able and willing to increase their
physical activity.
Comment: 11 pages, 5 figures. Video:
http://living.media.mit.edu/projects/see-saw
I Am The Passenger: How Visual Motion Cues Can Influence Sickness For In-Car VR
This paper explores the use of VR Head Mounted Displays
(HMDs) in-car and in-motion for the first time. Immersive
HMDs are becoming everyday consumer items and, as they
offer new possibilities for entertainment and productivity, people
will want to use them during travel in, for example, autonomous
cars. However, their use is confounded by motion
sickness caused in part by the restricted visual perception
of motion conflicting with physically perceived vehicle motion
(accelerations/rotations detected by the vestibular system).
Whilst VR HMDs restrict visual perception of motion, they
could also render it virtually, potentially alleviating sensory
conflict. To study this problem, we conducted the first on-road,
in-motion study to systematically investigate the effects
of various visual presentations of the real-world motion of
a car on the sickness and immersion of VR HMD wearing
passengers. We established new baselines for VR in-car motion
sickness, and found that there is no one best presentation
with respect to balancing sickness and immersion. Instead,
user preferences suggest different solutions are required for
differently susceptible users to provide usable VR in-car. This
work provides formative insights for VR designers and an entry
point for further research into enabling use of VR HMDs,
and the rich experiences they offer, when travelling.