34 research outputs found

    A Highly Accurate And Reliable Data Fusion Framework For Guiding The Visually Impaired

    The world has approximately 285 million visually impaired (VI) people, according to a report by the World Health Organization: 39 million people are estimated to be blind, while 246 million are estimated to have impaired vision. An important factor motivating this research is that 90% of VI people live in developing countries. Several systems have been designed to improve the quality of life of VI people and to support their mobility. Unfortunately, none of these systems provides a complete solution for VI people, and they are very expensive. Therefore, this work presents an intelligent framework that embeds several types of sensors in a wearable device to support the VI community. The proposed work integrates sensor-based and computer-vision-based techniques in order to introduce an efficient and economical visual-assistance device. The designed algorithm is divided into two components: obstacle detection and collision avoidance. The system has been implemented and tested in real-time scenarios. A video dataset of 30 videos, averaging 700 frames per video, was fed to the system for testing. The real-time detection component achieved a 96.53% accuracy rate with the proposed sequence of techniques, based on a wide detection view from two camera modules and a detection range of approximately 9 meters; a 98% accuracy rate was obtained on a larger dataset. The main contribution of this work, however, is a novel collision avoidance approach based on image depth and fuzzy control rules. Using an x-y coordinate system, we mapped the input frames, dividing each frame into three areas vertically and at one third of the frame height horizontally, in order to specify the urgency of any obstacle within that frame. In addition, fuzzy logic allowed us to provide precise information that helps the VI user avoid frontal obstacles. The strength of the proposed approach is that it aids VI users in avoiding 100% of all detected objects; once the device is initialized, the VI user can confidently enter unfamiliar surroundings. The implemented device can therefore be described as accurate, reliable, user-friendly, lightweight, and economically accessible; it facilitates the mobility of VI people and requires no prior knowledge of the surrounding environment. Finally, our proposed approach was compared with the most efficient existing techniques and shown to outperform them.
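
    A zone-based urgency mapping of the kind this abstract describes might be sketched as follows. The zone boundaries, depth thresholds, and urgency labels below are illustrative assumptions, not the authors' actual fuzzy rule base:

```python
def obstacle_urgency(x, y, depth_m, frame_w, frame_h):
    """Classify an obstacle's urgency from its frame position and depth.

    The frame is split into left/center/right vertical thirds; obstacles in
    the lower third of the frame (nearest the user's path) are treated as
    more pressing.  All thresholds here are illustrative assumptions.
    """
    # Horizontal zone: left / center / right third of the frame
    if x < frame_w / 3:
        zone = "left"
    elif x < 2 * frame_w / 3:
        zone = "center"
    else:
        zone = "right"

    # The lower third of the frame roughly corresponds to the ground ahead
    in_lower_third = y > 2 * frame_h / 3

    # Simple fuzzy-style rules over depth (detection range is ~9 m)
    if depth_m < 3 and (zone == "center" or in_lower_third):
        urgency = "high"
    elif depth_m < 6:
        urgency = "medium"
    else:
        urgency = "low"
    return zone, urgency

# Obstacle centred in frame, low in the image, 2 m away -> ("center", "high")
print(obstacle_urgency(320, 450, 2.0, 640, 480))
```

    The returned zone could then drive the avoidance cue (e.g. steer away from a "high"-urgency center obstacle), which is the role the fuzzy controller plays in the paper.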

    Head-mounted augmented reality for explainable robotic wheelchair assistance

    Robotic wheelchairs with built-in assistive features, such as shared control, are an emerging means of providing independent mobility to severely disabled individuals. However, patients often struggle to build a mental model of their wheelchair’s behaviour under different environmental conditions. Motivated by the desire to help users bridge this gap in perception, we propose a novel augmented reality system using a Microsoft HoloLens as a head-mounted aid for wheelchair navigation. The system displays visual feedback to the wearer as a way of explaining the underlying dynamics of the wheelchair’s shared controller and its predicted future states. To investigate the influence of different interface design options, a pilot study was also conducted. We evaluated the acceptance rate and learning curve of an immersive wheelchair training regime, revealing preliminary insights into the potentially beneficial and adverse nature of different augmented reality cues for assistive navigation. In particular, we demonstrate that care should be taken in the presentation of information, with effort-reducing cues for augmented information acquisition (for example, a rear-view display) being the most appreciated.

    Position control for haptic device based on discrete-time proportional integral derivative controller

    Haptic devices are an advanced technology whose goal is to create the experience of touch by applying forces and motions to the operator based on force feedback. In unmanned aerial vehicle (UAV) applications in particular, the position of the end-effector of a Falcon haptic device sets the velocity command for the UAV, and the operator can feel the vibration of the vehicle due to acceleration or collision with other objects through force feedback to the haptic device. In some emergency cases, the haptic device can alert the user to a dangerous situation of the UAV by changing the position of the end-effector, which is obtained by changing the motor angles using the inverse kinematic equations. However, this solution may be inaccurate due to disturbances in the system. Therefore, we propose a position controller for the haptic device based on a discrete-time proportional-integral-derivative (PID) controller. A Novint Falcon haptic device is used to demonstrate our proposal. From the hardware parameters, a Jacobian matrix is calculated, which is combined with the force output of the PID controller to produce the torques for the motors of the haptic device. Experiments show that the PID controller achieves high accuracy and a small position error.
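
    The discrete-time PID loop at the core of this controller might be sketched as below. The gains, sample time, and the pure-integrator toy plant are illustrative assumptions; the actual controller maps the PID force through the Falcon's Jacobian (computed from its hardware parameters) to motor torques:

```python
class DiscretePID:
    """Discrete-time PID: u[k] = Kp*e[k] + Ki*Ts*sum(e) + Kd*(e[k]-e[k-1])/Ts."""

    def __init__(self, kp, ki, kd, ts):
        self.kp, self.ki, self.kd, self.ts = kp, ki, kd, ts
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.ts                  # rectangular integration
        derivative = (error - self.prev_error) / self.ts  # backward difference
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)


# Drive a toy single-axis plant (a pure integrator standing in for one
# end-effector axis) toward a 5 cm setpoint at a 1 kHz sample rate.
pid = DiscretePID(kp=8.0, ki=2.0, kd=0.5, ts=0.001)
pos, target = 0.0, 0.05  # metres
for _ in range(5000):
    force = pid.update(target, pos)
    pos += pid.ts * force  # integrate the commanded output
```

    In the real device, `force` for each axis would instead be multiplied through the Jacobian to obtain the three motor torques.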

    Sensor-Based Assistive Devices for Visually-Impaired People: Current Status, Challenges, and Future Directions

    The World Health Organization (WHO) reported that there are 285 million visually impaired people worldwide. Among these individuals, 39 million are totally blind. Several systems have been designed to support visually-impaired people and to improve the quality of their lives. Unfortunately, most of these systems are limited in their capabilities. In this paper, we present a comparative survey of the wearable and portable assistive devices for visually-impaired people in order to show the progress in assistive technology for this group. Thus, the contribution of this literature survey is to discuss in detail the most significant devices presented in the literature to assist this population, highlighting their improvements, advantages, disadvantages, and accuracy. Our aim is to address and present most of the issues of these systems to pave the way for other researchers to design devices that ensure safety and independent mobility for visually-impaired people.
    https://doi.org/10.3390/s1703056

    Intuitive Robot Teleoperation Based on Haptic Feedback and 3D Visualization

    Robots are required in many jobs. Jobs involving tele-operation can be very challenging and often require reaching a destination quickly and with minimum collisions. To succeed in these jobs, human operators are asked to tele-operate a robot manually through a user interface. The design of the user interface, and of the information provided in it, therefore becomes a critical element for the successful completion of robot tele-operation tasks. Effective and timely robot tele-navigation mainly relies on the intuitiveness of the interface and on the richness and presentation of the feedback given. This project investigated the use of both haptic and visual feedback in a user interface for robot tele-navigation. The aim was to overcome some of the limitations observed in state-of-the-art works, turning what is sometimes described as contrasting into an added value that improves tele-navigation performance. The key issue is to combine different human sensory modalities in a coherent way and to benefit from 3-D vision too. The proposed new approach was inspired by how visually impaired people use walking sticks to navigate. Haptic feedback may help a user comprehend distances to surrounding obstacles and information about the obstacle distribution. This was achieved entirely by relying on on-board range sensors, and by processing their input through a simple scheme that regulates the magnitude and direction of the environmental force feedback provided to the haptic device. A specific algorithm was also used to render the distribution of very close objects to provide appropriate touch sensations. Scene visualization was provided by the system and shown to the user coherently with the haptic sensation. Different visualization configurations, from multi-viewpoint observation to 3-D visualization, were proposed and rigorously assessed through experimentation to understand the advantages of the proposed approach and performance variations among different 3-D display technologies. Over twenty users participated in a usability study composed of two major experiments. The first experiment focused on a comparison between the proposed haptic-feedback strategy and a typical state-of-the-art approach; it included testing with multi-viewpoint visual observation. The second experiment investigated the performance of the proposed haptic-feedback strategy when combined with three different stereoscopic-3D visualization technologies. The results from the experiments were encouraging, showing good performance with the proposed approach and an improvement over literature approaches to haptic feedback in robot tele-operation. It was also demonstrated that 3-D visualization can be beneficial for robot tele-navigation and will not conflict with haptic feedback if properly aligned to it. Performance may vary with different 3-D visualization technologies, which is also discussed in the presented work.
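
    A scheme that regulates the magnitude and direction of an environmental force from on-board range readings, as described above, might be sketched as follows. The inverse-distance force law, the cut-off distance, and the gain are illustrative assumptions rather than the thesis's actual rendering algorithm:

```python
import math

def environment_force(ranges, angles, d_max=2.0, k=1.0):
    """Sum a repulsive force over range-sensor readings.

    ranges -- distances (m) reported by on-board range sensors
    angles -- bearing (rad) of each reading in the robot frame
    d_max  -- readings beyond this distance contribute no force
    k      -- force gain
    Returns the (fx, fy) force to render on the haptic device:
    closer obstacles push harder, directed away from the obstacle.
    """
    fx = fy = 0.0
    for d, a in zip(ranges, angles):
        if d <= 0.0 or d >= d_max:
            continue
        magnitude = k * (1.0 / d - 1.0 / d_max)  # grows as the obstacle nears
        # Push opposite the obstacle's bearing
        fx -= magnitude * math.cos(a)
        fy -= magnitude * math.sin(a)
    return fx, fy

# An obstacle 0.5 m dead ahead pushes the haptic handle backwards (fx < 0)
fx, fy = environment_force([0.5], [0.0])
```

    Summing the per-reading contributions yields a single force vector whose direction encodes the obstacle distribution, much like a walking stick's resistance.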

    VFH+ based shared control for remotely operated mobile robots

    This paper addresses the problem of safe and efficient navigation in remotely controlled robots operating in hazardous and unstructured environments, or conducting other remote robotic tasks. A shared control method is presented which blends the commands from a VFH+ obstacle avoidance navigation module with the teleoperation commands provided by an operator via a joypad. The presented approach offers several advantages, such as flexibility (allowing for straightforward adaptation of the controller's behaviour and easy integration with variable autonomy systems) as well as the ability to cope with dynamic environments. The advantages of the presented controller are demonstrated by an experimental evaluation in a disaster response scenario. More specifically, the presented evidence shows a clear performance increase in terms of safety and task completion time compared to a pure teleoperation approach, as well as an ability to cope with previously unobserved obstacles.
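
    The paper does not spell out how the two command streams are combined beyond "blends"; one common shared-control arbitration is a linear blend weighted by an autonomy gain, sketched here under that assumption:

```python
def blend_commands(teleop_cmd, vfh_cmd, gain):
    """Linearly blend operator and VFH+ obstacle-avoidance commands.

    teleop_cmd, vfh_cmd -- (linear, angular) velocity pairs
    gain -- blending weight: 0.0 gives pure teleoperation,
            1.0 gives pure obstacle-avoidance output
    Returns the (linear, angular) velocity sent to the robot base.
    """
    v = (1.0 - gain) * teleop_cmd[0] + gain * vfh_cmd[0]
    w = (1.0 - gain) * teleop_cmd[1] + gain * vfh_cmd[1]
    return v, w

# Operator drives straight at 0.8 m/s while VFH+ suggests slowing and
# steering away; an equal blend yields the executed command.
v, w = blend_commands((0.8, 0.0), (0.4, 0.6), gain=0.5)
```

    Raising `gain` as obstacles approach shifts authority toward the VFH+ module, which is also how such a controller slots into a variable-autonomy system.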

    Investigation into the use of the Microsoft Kinect and the Hough transform for mobile robotics

    The Microsoft Kinect sensor is a low-cost RGB-D sensor. In this dissertation, its calibration is fully investigated, and the resulting parameters are compared to those given by Microsoft and OpenNI. The parameters found were different from those given by Microsoft and OpenNI; therefore, every Kinect should be fully calibrated. The transformation from the raw data to a point cloud is also investigated. The Hough transform is then presented in its 2-dimensional form. The Hough transform is a line extraction algorithm which uses a voting system. It is compared to the Split-and-Merge algorithm using laser range finder data, and is found to compare well with Split-and-Merge in 2 dimensions. Finally, the Hough transform is extended into 3 dimensions for use with the Kinect sensor. It was found that pre-processing of the Kinect data was necessary to reduce the number of points input into the Hough transform. Three edge detectors were used - the LoG, Canny and Sobel edge detectors. These were compared, and the Sobel detector was found to be the best. The final process was then used in multiple ways - first to determine its speed, and then to investigate its accuracy. It was found that the extracted planes were very inaccurate, and therefore not suitable for obstacle avoidance in mobile robotics. The suitability of the process for SLAM was also investigated. It was found to be unsuitable, as planar environments did not have distinct features which could be tracked, whilst the complex environment was not planar, and therefore the Hough transform would not work.
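
    The 2-dimensional Hough voting scheme the dissertation builds on can be sketched as below; the accumulator resolution and extent are illustrative choices, not the dissertation's parameters:

```python
import math

def hough_lines(points, rho_res=1.0, theta_steps=180, rho_max=100.0):
    """Accumulate Hough votes in (rho, theta) space for 2-D points.

    Each point votes for every line rho = x*cos(theta) + y*sin(theta)
    passing through it; peaks in the accumulator correspond to
    extracted lines.  Resolutions here are illustrative.
    """
    n_rho = int(2 * rho_max / rho_res)
    acc = [[0] * theta_steps for _ in range(n_rho)]
    for x, y in points:
        for t in range(theta_steps):
            theta = math.pi * t / theta_steps
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / rho_res)  # shift rho to a valid index
            if 0 <= r < n_rho:
                acc[r][t] += 1
    return acc

# 20 collinear points on the vertical line x = 10 all vote for the same
# (rho, theta) cell, producing a maximal peak of 20 votes there.
pts = [(10.0, y) for y in range(20)]
acc = hough_lines(pts)
peak = max(max(row) for row in acc)
```

    The 3-D extension replaces (rho, theta) with a plane parameterisation, which is why the point count must first be reduced by edge detection: the accumulator and voting cost grow with every added dimension.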

    Smart Rollators Aid Devices: Current Trends and Challenges.

    Mobility loss has a major impact on autonomy. Smart rollators have been proposed to enhance human abilities when conventional devices are not enough, and many human-robot interaction systems have been proposed in this area over the last decade. Comparative analysis shows that, mechanical issues aside, they mainly differ in: first, the equipped sensors and actuators; second, the input interface; third, the operation modes; and fourth, the adaptation capabilities. This article presents a review and a tentative taxonomy of approaches from the last 6 years. In total, 92 papers have been reviewed; works not focused on human-robot interaction, or focused only on mechanical adaptation, were discarded. A critical analysis is provided after the review and classification, highlighting systems tested with their target population.
    This work was supported by the Spanish project RTI2018-096701-B-C21 and the Swedish Knowledge Foundation (KKS) through the research profile Embedded Sensor Systems for Health Plus (ESS-H+) at Mälardalen University, Sweden.

    Virtual reality based multi-modal teleoperation using mixed autonomy

    This thesis presents a multi-modal teleoperation interface featuring an integrated virtual-reality-based simulation augmented by sensors and image processing capabilities onboard the remotely operated vehicle. The virtual reality interface fuses an existing VR model with a live video feed and prediction states, thereby creating a multi-modal control interface. Virtual reality addresses the typical limitations of video-based teleoperation caused by signal lag and limited field of view, thereby allowing the operator to navigate in a continuous fashion. The vehicle incorporates an on-board computer and a stereo vision system to facilitate obstacle detection. A vehicle adaptation system with a priori risk maps and a real-state tracking system enables temporary autonomous operation of the vehicle for local navigation around obstacles and automatic re-establishment of the vehicle's teleoperated state. As both the vehicle and the operator share absolute autonomy in stages, the operation is referred to as mixed autonomous. Finally, the system provides real-time updates of the virtual environment based on anomalies encountered by the vehicle. The system effectively balances autonomy between the human operator and on-board vehicle intelligence. The reliability results of the individual components, together with the overall system implementation and the results of the user study, help to show that the VR-based multi-modal teleoperation interface is more adaptable and intuitive than other interfaces.