Rehabilitative devices for a top-down approach
In recent years, neurorehabilitation has moved from a "bottom-up" to a "top-down" approach. This change has also involved the technological devices developed for motor and cognitive rehabilitation. It implies that, during a task or therapeutic exercise, new "top-down" approaches stimulate the brain more directly to elicit plasticity-mediated motor re-learning. This is opposed to "bottom-up" approaches, which act at the physical level and attempt to bring about changes at the level of the central nervous system. Areas covered: In this unsystematic review, we present the most promising innovative technological devices that can effectively support rehabilitation based on a top-down approach, according to the most recent neuroscientific and neurocognitive findings. In particular, we explore whether and how new technological devices, including serious exergames, virtual reality, robots, brain-computer interfaces, rhythmic music and biofeedback devices, might provide a top-down based approach. Expert commentary: Motor and cognitive systems are strongly intertwined in humans and thus cannot be separated in neurorehabilitation. Recently developed technologies in motor-cognitive rehabilitation might have a greater positive effect than conventional therapies.
This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot.
In a human-robot shared space, robots should navigate in a human-acceptable way
according to the users' demands. For this purpose, we employ virtual borders,
i.e. non-physical borders, that allow a user to restrict the robot's
workspace. To this end, we propose an interaction method based on a laser
pointer to intuitively define virtual borders. This interaction method uses a
previously developed framework based on robot guidance to change the robot's
navigational behavior. Furthermore, we extend this framework to increase its
flexibility by considering different types of virtual borders, i.e. polygons
and curves separating an area. We evaluated our method with 15 non-expert users
with respect to correctness, accuracy and teaching time. The experimental results
revealed high accuracy and a teaching time linear in the border length, while
the borders were correctly incorporated into the robot's navigational map.
Finally, our user study showed that non-expert users can employ our
interaction method.
Comment: Accepted at 2019 Third IEEE International Conference on Robotic
Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
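The abstract mentions incorporating polygonal virtual borders into the robot's navigational map. A minimal sketch of one way this could work, assuming an occupancy-grid map in the style of ROS costmaps; the helper name, grid values, and the use of `matplotlib.path.Path` are illustrative assumptions, not the authors' implementation:

```python
# Sketch: stamping a polygon virtual border into a 2D occupancy grid so
# a planner treats the enclosed area as off-limits. Illustrative only.
import numpy as np
from matplotlib.path import Path

OCCUPIED = 100  # common occupancy-grid value for a lethal cell

def apply_polygon_border(grid, polygon, resolution, origin=(0.0, 0.0)):
    """Mark every grid cell inside `polygon` (metres) as occupied.

    grid       -- 2D int array (rows = y, cols = x)
    polygon    -- list of (x, y) vertices in map coordinates
    resolution -- metres per cell
    origin     -- map coordinate of cell (0, 0)
    """
    rows, cols = grid.shape
    # Centre coordinates of every cell, expressed in the map frame.
    xs = origin[0] + (np.arange(cols) + 0.5) * resolution
    ys = origin[1] + (np.arange(rows) + 0.5) * resolution
    xx, yy = np.meshgrid(xs, ys)
    inside = Path(polygon).contains_points(
        np.column_stack([xx.ravel(), yy.ravel()])
    ).reshape(rows, cols)
    grid[inside] = OCCUPIED
    return grid

# Example: forbid a 2 m x 2 m square region in a 10 m x 10 m map.
grid = np.zeros((200, 200), dtype=np.int8)  # 0.05 m per cell
apply_polygon_border(grid, [(4, 4), (6, 4), (6, 6), (4, 6)], 0.05)
```

A curve separating an area (the paper's second border type) could be handled similarly by closing the curve against the map boundary before rasterising it.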
Interactively Picking Real-World Objects with Unconstrained Spoken Language Instructions
Comprehension of spoken natural language is an essential component for robots
to communicate with humans effectively. However, handling unconstrained spoken
instructions is challenging due to (1) complex structures, including the wide
variety of expressions used in spoken language, and (2) inherent ambiguity in
the interpretation of human instructions. In this paper, we propose the first
comprehensive system that can handle unconstrained spoken language and is able
to effectively resolve ambiguity in spoken instructions. Specifically, we
integrate deep-learning-based object detection with natural language
processing technologies to handle unconstrained spoken instructions, and
propose a method for robots to resolve instruction ambiguity through dialogue.
Through experiments in both a simulated environment and on a physical
industrial robot arm, we demonstrate that our system understands
natural instructions from human operators effectively, and that higher success
rates on the object-picking task can be achieved through an interactive
clarification process.
Comment: 9 pages. International Conference on Robotics and Automation (ICRA)
2018. Accompanying videos are available at the following links:
https://youtu.be/_Uyv1XIUqhk (the system submitted to ICRA-2018) and
http://youtu.be/DGJazkyw0Ws (with improvements after the ICRA-2018 submission)
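The interactive clarification process described above can be sketched as a loop that narrows the candidate set with each follow-up answer. This is a minimal stand-in, assuming detections come with descriptive attribute labels; the keyword-overlap matcher below replaces the paper's deep-learning components and is purely illustrative:

```python
# Sketch: dialogue-based disambiguation over a set of detected objects.
def score_match(utterance, obj):
    """Fraction of the object's attribute words mentioned in the utterance."""
    words = set(utterance.lower().split())
    attrs = set(obj["attributes"])
    return len(words & attrs) / len(attrs)

def resolve_reference(instruction, detections, threshold=0.5, max_rounds=3):
    """Return the object the instruction refers to, asking follow-up
    questions while more than one detection matches."""
    candidates = [o for o in detections
                  if score_match(instruction, o) >= threshold]
    for _ in range(max_rounds):
        if len(candidates) <= 1:
            break
        answer = input(f"I see {len(candidates)} matching objects. "
                       "Which one do you mean? ")
        combined = instruction + " " + answer
        narrowed = [o for o in candidates
                    if score_match(combined, o) > score_match(instruction, o)]
        candidates = narrowed or candidates  # keep set if the answer didn't help
    return candidates[0] if candidates else None

# Toy detections: an id plus descriptive attributes.
detections = [
    {"id": 0, "attributes": ["red", "cup"]},
    {"id": 1, "attributes": ["blue", "cup"]},
    {"id": 2, "attributes": ["green", "bottle"]},
]
print(resolve_reference("pick up the cup", detections))
```

With the toy data, "pick up the cup" matches two objects, the loop asks a clarifying question, and an answer such as "the red one" resolves the reference.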
A Comparison of Visualisation Methods for Disambiguating Verbal Requests in Human-Robot Interaction
Picking up objects requested by a human user is a common task in human-robot
interaction. When multiple objects match the user's verbal description, the
robot needs to clarify which object the user is referring to before executing
the action. Previous research has focused on perceiving the user's multimodal
behaviour to complement verbal commands, or on minimising the number of follow-up
questions to reduce task time. In this paper, we propose a system for reference
disambiguation based on visualisation and compare three methods to disambiguate
natural language instructions. In a controlled experiment with a YuMi robot, we
investigated real-time augmentations of the workspace in three conditions --
mixed reality, augmented reality, and a monitor as the baseline -- using
objective measures such as time and accuracy, and subjective measures like
engagement, immersion, and display interference. Significant differences were
found in accuracy and engagement between the conditions, but no differences
were found in task time. Despite the higher error rates in the mixed-reality
condition, participants found that modality more engaging than the other two,
but overall preferred the augmented-reality condition over the
monitor and mixed-reality conditions.
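The core idea of visualisation-based reference disambiguation can be illustrated with a small sketch: number the candidate detections, draw the numbers over the scene, and let the user resolve the reference by index. The OpenCV drawing and the `disambiguate_by_display` helper below are illustrative assumptions; the study itself rendered the augmentations in MR, AR, and monitor conditions:

```python
# Sketch: resolve an ambiguous reference by displaying numbered candidates.
import cv2
import numpy as np

def disambiguate_by_display(image, candidates):
    """Overlay an index next to each candidate bounding box and return
    the object the user selects. `candidates` is a list of dicts with a
    `box` = (x, y, w, h) entry in pixel coordinates."""
    canvas = image.copy()
    for i, obj in enumerate(candidates):
        x, y, w, h = obj["box"]
        cv2.rectangle(canvas, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(canvas, str(i), (x, y - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("Which object did you mean?", canvas)
    cv2.waitKey(1)  # render the overlay before blocking on input
    choice = int(input("Enter the number of the intended object: "))
    return candidates[choice]

# Example with a blank camera frame and two candidate boxes.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
picked = disambiguate_by_display(frame, [{"box": (50, 50, 80, 120)},
                                         {"box": (300, 60, 90, 110)}])
```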
Creating Interaction Scenarios With a New Graphical User Interface
The field of human-centered computing has seen major progress in recent
years. It is widely accepted that this field is multidisciplinary and that the
human is at the core of the system. This highlights two matters of concern:
multidisciplinarity and the human. The first implies that each discipline plays
an important role in the overall research and that collaboration between
all of them is needed. The second reflects that a growing number of studies
aim to increase the degree of human commitment by giving the user a
decisive role in the human-machine interaction. This paper focuses on both of
these concerns and presents MICE (Machines Interaction Control in their
Environment), a system in which the human is the one who makes the
decisions to manage the interaction with the machines. In an ambient context,
the human can decide on the actions of objects by creating interaction scenarios with
a new visual programming language: scenL.
Comment: 5th International Workshop on Intelligent Interfaces for
Human-Computer Interaction, Palermo, Italy (2012)
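An interaction scenario of the kind the abstract describes might be represented as event-condition-action rules over ambient objects. The sketch below is only an illustrative stand-in: scenL itself is a visual language, and the `Rule` structure and event names here are assumptions, not its actual semantics:

```python
# Sketch: an ambient-interaction scenario as event-condition-action rules.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    event: str                         # e.g. "person_enters_room"
    condition: Callable[[dict], bool]  # predicate over the ambient state
    action: Callable[[dict], None]     # command sent to a machine

def run_scenario(rules, event, state):
    """Fire every rule matching `event` whose condition holds."""
    for rule in rules:
        if rule.event == event and rule.condition(state):
            rule.action(state)

# Example: turn a lamp on when someone enters a dark room.
scenario = [
    Rule("person_enters_room",
         condition=lambda s: s["light_level"] < 0.3,
         action=lambda s: print("lamp: ON")),
]
run_scenario(scenario, "person_enters_room", {"light_level": 0.1})
```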