Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential of service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people and of advances in technology that will make new uses possible, and provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable
State of the art review on walking support system for visually impaired people
Technology for terrain detection and walking support systems for blind people has improved rapidly over the last couple of decades, although efforts to assist visually impaired people began long ago. Currently, a variety of portable or wearable navigation systems are available on the market to help blind people navigate in local or remote areas. These can be subgrouped as electronic travel aids (ETAs), electronic orientation aids (EOAs) and position locator devices (PLDs); this work focuses mainly on electronic travel aids (ETAs). This paper presents a comparative survey of the various portable or wearable walking support systems (a subcategory, or early stage, of ETAs), with an informative description of each system's working principle, advantages and disadvantages, so that researchers can readily assess the current state of assistive technology for the blind along with the requirements for optimising the design of walking support systems for their users
Mobile assistive technologies for the visually impaired
There are around 285 million visually impaired people worldwide, and around 370,000 people are registered as blind or partially sighted in the UK. Ongoing advances in information technology (IT) are increasing the scope for IT-based mobile assistive technologies to facilitate the independence, safety, and improved quality of life of the visually impaired. Research is being directed at making mobile phones and other handheld devices accessible via our haptic (touch) and audio sensory channels. We review research and innovation within the field of mobile assistive technology for the visually impaired and, in so doing, highlight the need for successful collaboration between clinical expertise, computer science, and domain users to realize fully the potential benefits of such technologies. We initially reflect on research that has been conducted to make mobile phones more accessible to people with vision loss. We then discuss innovative assistive applications designed for the visually impaired that are either delivered via mainstream devices and can be used while in motion (e.g., mobile phones) or are embedded within an environment that may be in motion (e.g., public transport) or within which the user may be in motion (e.g., smart homes)
Haptic Interaction with a Guide Robot in Zero Visibility
Search and rescue operations are often undertaken in dark and noisy environments in which the rescue team must rely on haptic feedback for exploration and safe exit. However, little attention has been paid specifically to haptic sensitivity in such contexts or to the possibility of enhancing communicational proficiency in the haptic mode as a life-preserving measure. The potential of robot swarms for search and rescue was shown by the Guardians project (EU, 2006-2010); however, the project also revealed the problem of human-robot interaction in smoky (zero-visibility) and noisy conditions. The REINS project (UK, 2011-2015) focused on human-robot interaction in such conditions. This research is a body of work (done as a part of the REINS project) which investigates the haptic interaction of a person with a guide robot in zero visibility. The thesis firstly reflects upon real-world scenarios where people make use of the haptic sense to interact in zero visibility (such as interaction among firefighters and the symbiotic relationship between visually impaired people and guide dogs). In addition, it reflects on the sensitivity and trainability of the haptic sense to be used for the interaction. The thesis presents an analysis and evaluation of the design of a physical interface (designed by the consortium of the REINS project) connecting the human and the robotic guide in poor visibility conditions. Finally, it lays a foundation for the design of test cases to evaluate human-robot haptic interaction, taking into consideration the two aspects of the interaction, namely locomotion guidance and environmental exploration
Portable Robotic Navigation Aid for the Visually Impaired
This dissertation aims to address the limitations of existing visual-inertial (VI) SLAM methods - lack of needed robustness and accuracy - for assistive navigation in a large indoor space. Several improvements are made to existing SLAM technology, and the improved methods are used to enable two robotic assistive devices, a robot cane, and a robotic object manipulation aid, for the visually impaired for assistive wayfinding and object detection/grasping. First, depth measurements are incorporated into the optimization process for device pose estimation to improve the success rate of VI SLAM's initialization and reduce scale drift. The improved method, called depth-enhanced visual-inertial odometry (DVIO), initializes itself immediately as the environment's metric scale can be derived from the depth data. Second, a hybrid PnP (perspective n-point) method is introduced for a more accurate estimation of the pose change between two camera frames by using the 3D data from both frames. Third, to implement DVIO on a smartphone with variable camera intrinsic parameters (CIP), a method called CIP-VMobile is devised to simultaneously estimate the intrinsic parameters and motion states of the camera. CIP-VMobile estimates in real time the CIP, which varies with the smartphone's pose due to the camera's optical image stabilization mechanism, resulting in more accurate device pose estimates. Various experiments are performed to validate the VI-SLAM methods with the two robotic assistive devices.
Beyond these primary objectives, SM-SLAM is proposed as a potential extension for the existing SLAM methods in dynamic environments. This forward-looking exploration is premised on the potential that incorporating dynamic object detection capabilities in the front-end could improve SLAM's overall accuracy and robustness. Various experiments have been conducted to validate the efficacy of this newly proposed method, using both public and self-collected datasets. The results obtained substantiate the viability of this innovation, leaving a deeper investigation for future work
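As a much-simplified illustration of the scale-initialization idea above (deriving the environment's metric scale from depth data), the following sketch recovers a single scale factor for up-to-scale monocular depths from aligned depth-sensor readings. This is not the dissertation's DVIO implementation; the function and variable names are hypothetical.

```python
import numpy as np

def estimate_scale(mono_depths, sensor_depths):
    """Estimate the single scale factor mapping up-to-scale monocular
    depths onto metric depth-sensor readings for the same scene points,
    using a robust median of per-point ratios."""
    mono = np.asarray(mono_depths, dtype=float)
    sensor = np.asarray(sensor_depths, dtype=float)
    valid = (mono > 0) & (sensor > 0)       # skip dropouts in either source
    return float(np.median(sensor[valid] / mono[valid]))

# Toy check: monocular depths equal metric depths divided by 2.5,
# with one dropout reading that the valid-mask discards
sensor_d = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
mono_d = sensor_d / 2.5
mono_d[2] = 0.0                             # simulated missing reading
scale = estimate_scale(mono_d, sensor_d)
```

The median keeps the estimate robust to a few corrupted or missing point pairs, which is why an immediate initialization from a single depth-aligned frame is plausible.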
Enhancing trust and confidence in human robot interaction
We investigate human-robot interaction under no-visibility conditions. A major precondition for successful human-robot cooperation in these circumstances is that the human trusts and has confidence in the robot. In order to enhance human trust and confidence we have to make design choices that impact on a number of ethical issues. We also look at the interaction between a visually impaired person and a guide dog for clues to enhancing confidence. The interaction involves mixed initiative, and the guide dog does not have full navigation responsibility. This model also seems appropriate for human-robot interaction and might, in addition, be a useful example for evaluating the ethical issues of human-robot interaction
An Optimal State Dependent Haptic Guidance Controller via a Hard Rein
The aim of this paper is to improve the optimality and accuracy of techniques to guide a human under limited visibility and auditory conditions, such as fire-fighting in warehouses or similar environments. At present, teams of fire-fighters wearing breathing apparatus (BA) move together following walls. Due to limited visibility and high noise in the oxygen masks, they depend predominantly on haptic communication through reins. An intelligent agent (man or machine) with full environmental perception is an alternative for enhancing navigation in such unfavourable environments, just as a dog guides a blind person. This paper proposes an optimal state-dependent control policy by which an intelligent, environmentally perceptive agent guides a follower with limited environmental perception. Based on experimental system identification and numerical simulations of human demonstrations from eight pairs of participants, we show that the guiding agent and the follower learn optimal, stable, state-dependent control policies: a novel third-order autoregressive predictive policy for the guide and a second-order reactive policy for the follower. Our findings provide a novel theoretical basis for designing advanced human-robot interaction algorithms in a variety of cases where a human counterpart requires the assistance of a robot to perceive the environment
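The autoregressive policies identified above predict the next state from a few past states. As a minimal sketch of how such a model can be identified from demonstration data (not the paper's method; all names are mine), here is a third-order AR fit by least squares with a one-step-ahead predictor:

```python
import numpy as np

def fit_ar(series, order=3):
    """Fit autoregressive coefficients a_1..a_p by least squares so that
    x[t] ~ a_1*x[t-1] + ... + a_p*x[t-p]."""
    x = np.asarray(series, dtype=float)
    # Each row holds the p most recent samples (newest first) at time t
    rows = np.vstack([x[t - order:t][::-1] for t in range(order, len(x))])
    targets = x[order:]
    coeffs, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return coeffs

def predict_next(series, coeffs):
    """One-step-ahead prediction from the last len(coeffs) samples."""
    recent = np.asarray(series, dtype=float)[:-len(coeffs) - 1:-1]  # newest first
    return float(coeffs @ recent)

# Generate a noiseless AR(3) trajectory and check that the fit recovers it
x = [1.0, 0.5, -0.3]
for _ in range(27):
    x.append(0.5 * x[-1] + 0.3 * x[-2] + 0.1 * x[-3])
a = fit_ar(x, order=3)
```

On noiseless data the least-squares fit recovers the generating coefficients exactly; real demonstration data would require the experimental system-identification machinery the paper describes.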
Real-Time Obstacle Detection System in Indoor Environment for the Visually Impaired Using Microsoft Kinect Sensor
Any mobility aid for visually impaired people should be able to accurately detect and warn about nearby obstacles. In this paper, we present a method for a support system to detect obstacles in indoor environments based on the Kinect sensor and 3D image processing. Colour-depth data of the scene in front of the user is collected using the Kinect, with the support of the standard framework for 3D sensing, OpenNI, and processed with the PCL library to extract accurate 3D information about the obstacles. The experiments have been performed on datasets covering multiple indoor scenarios and different lighting conditions. Results showed that our system is able to accurately detect four types of obstacle: walls, doors, stairs, and a residual class that covers loose obstacles on the floor. Specifically, walls and loose obstacles on the floor are detected in practically all cases, whereas doors are detected in 90.69% of 43 positive image samples. For step detection, upstairs cases were correctly detected in 97.33% of 75 positive images, while the rate for downstairs detection was lower at 89.47% of 38 positive images. Our method further allows the computation of the distance between the user and the obstacles
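The full pipeline above relies on OpenNI acquisition and PCL plane extraction; as a much-reduced sketch of only its final step (the user-to-obstacle distance from a depth image), the following is illustrative, with thresholds and names that are assumptions rather than the paper's values:

```python
import numpy as np

def nearest_obstacle(depth_m, max_range=4.0, warn_at=1.2):
    """Return (distance to the closest valid depth reading, warning flag).
    depth_m is a depth image in metres; 0 marks missing readings."""
    d = np.asarray(depth_m, dtype=float)
    valid = (d > 0) & (d <= max_range)      # drop dropouts and far returns
    if not valid.any():
        return None, False
    nearest = float(d[valid].min())
    return nearest, nearest <= warn_at

# Toy 3x4 "depth image": an obstacle edge 0.9 m away among farther surfaces,
# with one dropout pixel (0.0) that is ignored
depth = np.array([[0.0, 2.5, 2.6, 2.7],
                  [0.9, 2.4, 2.5, 2.6],
                  [1.0, 2.3, 2.4, 2.5]])
dist, warn = nearest_obstacle(depth)
```

A real system would first segment planes (walls, floor, stairs) before measuring distances, so that structural surfaces and loose obstacles can be reported separately.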