This Far, No Further: Introducing Virtual Borders to Mobile Robots Using a Laser Pointer
We address the problem of controlling the workspace of a 3-DoF mobile robot.
In a human-robot shared space, robots should navigate in a human-acceptable way
according to the users' demands. For this purpose, we employ virtual borders,
that are non-physical borders, to allow a user the restriction of the robot's
workspace. To this end, we propose an interaction method based on a laser
pointer to intuitively define virtual borders. This interaction method uses a
previously developed framework based on robot guidance to change the robot's
navigational behavior. Furthermore, we extend this framework to increase the
flexibility by considering different types of virtual borders, i.e. polygons
and curves separating an area. We evaluated our method with 15 non-expert users
concerning correctness, accuracy and teaching time. The experimental results
revealed a high accuracy and linear teaching time with respect to the border
length while correctly incorporating the borders into the robot's navigational
map. Finally, our user study showed that non-expert users can employ our
interaction method.
Comment: Accepted at 2019 Third IEEE International Conference on Robotic Computing (IRC), supplementary video: https://youtu.be/lKsGp8xtyI
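The abstract above does not publish any code, but the idea of incorporating a polygon-type virtual border into the robot's navigational map can be sketched. The following is a minimal, hypothetical illustration: the border is rasterized into an occupancy grid by marking every cell whose center falls inside the user-defined polygon as occupied, so the planner treats the enclosed area as off-limits. The grid layout, resolution handling, and polygon format are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: rasterize a polygon-type virtual border into an
# occupancy grid so a planner avoids the enclosed area. Not the paper's code.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon (list of (px, py))?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Only edges that straddle the horizontal ray at height y matter.
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that ray.
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def apply_virtual_border(grid, polygon, resolution=1.0):
    """Mark every grid cell whose center lies inside the border as occupied (1)."""
    for row in range(len(grid)):
        for col in range(len(grid[0])):
            cx = (col + 0.5) * resolution
            cy = (row + 0.5) * resolution
            if point_in_polygon(cx, cy, polygon):
                grid[row][col] = 1
    return grid
```

A curve-type border separating an area (the second border type the abstract mentions) could be handled similarly by closing the curve against the map boundary before rasterization.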
A Framework for Interactive Teaching of Virtual Borders to Mobile Robots
The increasing number of robots in home environments leads to an emerging
coexistence between humans and robots. Robots undertake common tasks and
support the residents in their everyday life. People appreciate the presence of
robots in their environment as long as they keep the control over them. One
important aspect is the control of a robot's workspace. Therefore, we introduce
virtual borders to precisely and flexibly define the workspace of mobile
robots. First, we propose a novel framework that allows a person to
interactively restrict a mobile robot's workspace. To show the validity of this
framework, a concrete implementation based on visual markers is implemented.
Afterwards, the mobile robot is capable of performing its tasks while
respecting the new virtual borders. The approach is accurate, flexible and less
time consuming than explicit robot programming. Hence, even non-experts are
able to teach virtual borders to their robots, which is especially interesting
in domains like vacuuming or service robots in home environments.
Comment: 7 pages, 6 figures
Novel Approach for User-Friendly Home Automation System on One Touch
Home automation has shown its far-reaching and crucial importance in the domestic and industrial world over the last 10 to 15 years. This paper presents the research and development of an embedded home automation project. Many existing home automation systems are likewise based on a "central unit" that controls everything, but they are quite expensive and less user-friendly. The main purpose of this project is to reduce the cost of the system while providing a better, user-friendly interface. The basic idea of the system is to have a server as the central controller with multiple clients: it is connected to a mobile device at a remote location, which sends queries to access the home devices, and to the circuitry present in the home over a wireless connection (ZigBee interface). In this case, the system consists of replicas of the home that is to be automated. It also has the asset of being a real-time system.
DOI: 10.17762/ijritcc2321-8169.15014
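The central-controller idea described above can be illustrated with a small command dispatcher of the kind such a server might run. Everything here is an assumption made for illustration: the device names, the `"<device> <on|off>"` query format, and the fact that these strings would arrive over the ZigBee link rather than be called directly.

```python
# Illustrative sketch only: a central-server command dispatcher of the kind
# the paper describes. Device names and the query format are invented; a real
# system would receive these query strings over the ZigBee radio link.

DEVICES = {"light": False, "fan": False}  # device name -> on/off state

def handle_query(query):
    """Parse a '<device> <on|off>' query from a remote client and update state."""
    try:
        device, action = query.strip().lower().split()
    except ValueError:
        return "ERROR: expected '<device> <on|off>'"
    if device not in DEVICES:
        return f"ERROR: unknown device '{device}'"
    DEVICES[device] = (action == "on")
    return f"OK: {device} is now {'on' if DEVICES[device] else 'off'}"
```

For example, a query of `"LIGHT ON"` would switch the light relay on and return a confirmation string to the mobile client.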
In-home and remote use of robotic body surrogates by people with profound motor deficits
By controlling robots comparable to the human body, people with profound
motor deficits could potentially perform a variety of physical tasks for
themselves, improving their quality of life. The extent to which this is
achievable has been unclear due to the lack of suitable interfaces by which to
control robotic body surrogates and a dearth of studies involving substantial
numbers of people with profound motor deficits. We developed a novel, web-based
augmented reality interface that enables people with profound motor deficits to
remotely control a PR2 mobile manipulator from Willow Garage, which is a
human-scale, wheeled robot with two arms. We then conducted two studies to
investigate the use of robotic body surrogates. In the first study, 15 novice
users with profound motor deficits from across the United States controlled a
PR2 in Atlanta, GA to perform a modified Action Research Arm Test (ARAT) and a
simulated self-care task. Participants achieved clinically meaningful
improvements on the ARAT and 12 of 15 participants (80%) successfully completed
the simulated self-care task. Participants agreed that the robotic system was
easy to use, was useful, and would provide a meaningful improvement in their
lives. In the second study, one expert user with profound motor deficits had
free use of a PR2 in his home for seven days. He performed a variety of
self-care and household tasks, and also used the robot in novel ways. Taking
both studies together, our results suggest that people with profound motor
deficits can improve their quality of life using robotic body surrogates, and
that they can gain benefit with only low-level robot autonomy and without
invasive interfaces. However, methods to reduce the rate of errors and increase
operational speed merit further investigation.
Comment: 43 pages, 13 figures
GUI system for Elders/Patients in Intensive Care
In old age, some people need special care if they suffer from specific diseases, as they can have a stroke during their normal life routine. Patients of any age who are unable to walk also need personal care, but for this they either have to be in a hospital or someone such as a nurse must be with them. This is costly in terms of money and manpower: a person is needed for 24x7 care of these people. To help in this respect, we propose a vision-based system that takes input from the patient and provides information to a specified person, who may not currently be in the patient's room. This reduces the need for manpower, and continuous monitoring is no longer required. The system uses MS Kinect for gesture detection for better accuracy, and it can easily be installed at home or in a hospital. The system provides a GUI for simple usage and gives visual and audio feedback to the user. It works on natural hand interaction, needs no training before use, and requires no glove or color strip to be worn.
Comment: In proceedings of the 4th IEEE International Conference on International Technology Management Conference, Chicago, IL USA, 12-15 June, 201
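The core loop of such a system, once Kinect has recognized a gesture, is a mapping from gesture labels to caregiver notifications. The sketch below is purely illustrative: the gesture names and message texts are invented, and the actual Kinect-based detection (the hard part) is not shown.

```python
# Hedged sketch: mapping recognized hand gestures to caregiver notifications.
# Gesture labels and messages are hypothetical; MS Kinect performs the actual
# gesture detection in the paper's system and is not modeled here.

GESTURE_ACTIONS = {
    "raise_hand": "Patient requests attention",
    "wave": "Patient requests water",
    "fist": "Emergency: notify caregiver immediately",
}

def notify(gesture):
    """Return the message to send to the remote caregiver, or None if unknown."""
    message = GESTURE_ACTIONS.get(gesture)
    if message is None:
        # Unrecognized gesture: the GUI would give visual/audio feedback
        # asking the patient to repeat, without alerting anyone.
        return None
    return message
```

A table-driven design like this keeps the gesture vocabulary easy to extend without retraining users, which matches the abstract's claim that the system needs no training before use.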
Human-display interaction technology: Emerging remote interfaces for pervasive display environments
This is the author's accepted manuscript. The final published article is available from the link below. Copyright @ 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

We're living in a world where information processing isn't confined to desktop computers - it's being integrated into everyday objects and activities. Pervasive computation is human centered: it permeates our physical world, helping us achieve goals and fulfill our needs with minimum effort by exploiting natural interaction styles. Remote interaction with screen displays requires a sensor-based, multimodal, touchless approach. For example, by processing user hand gestures, this paradigm removes constraints requiring physical contact and permits natural interaction with tangible digital information. Such touchless interaction can be multimodal, exploiting the visual, auditory, and olfactory senses.

Ministerio de Educación y Ciencia and Amper Sistemas, SA
Robots for Exploration, Digital Preservation and Visualization of Archeological Sites
Monitoring and conservation of archaeological sites are important activities
necessary to prevent damage or to perform restoration on cultural heritage.
Standard techniques, like mapping and digitizing, are typically used to
document the status of such sites. While these tasks are normally accomplished
manually by humans, this is not possible when dealing with hard-to-access
areas. For example, due to the possibility of structural collapses,
underground tunnels like catacombs are considered highly unstable
environments. Moreover, they are full of the radioactive gas radon, which
limits the presence of people to only a few minutes. The progress recently
made in the artificial intelligence and robotics fields has opened new
possibilities for mobile robots to be used in locations where humans are not
allowed to enter. The ROVINA project aims at developing autonomous mobile
robots to make the monitoring of archaeological sites faster, cheaper and
safer. ROVINA will be evaluated on the catacombs of Priscilla (in Rome) and
S. Gennaro (in Naples).