    Viia-hand: a Reach-and-grasp Restoration System Integrating Voice interaction, Computer vision and Auditory feedback for Blind Amputees

    Visual feedback plays a crucial role in prosthesis control, guiding amputees as they complete grasping tasks. For blind and visually impaired (BVI) amputees, however, the loss of both visual and grasping abilities turns the "easy" reach-and-grasp task into a formidable challenge. In this paper, we propose a novel multi-sensory prosthesis system that helps BVI amputees with sensing, navigation and grasp operations. It combines modules for voice interaction, environmental perception, grasp guidance, collaborative control, and auditory/tactile feedback. In particular, the voice interaction module receives user instructions and invokes the other functional modules accordingly. The environmental perception and grasp guidance module obtains environmental information through computer vision and relays it to the user through auditory feedback (voice prompts and spatial sound sources) and tactile feedback (vibration stimulation). The prosthesis collaborative control module obtains the context of the grasp guidance process and, in conjunction with the user's control intention, collaboratively controls the grasp gestures and wrist angles of the prosthesis to achieve a stable grasp of various objects. This paper details a prototype design (named viia-hand) and presents its preliminary experimental verification on healthy subjects completing specific reach-and-grasp tasks. Our results showed that, with the help of our new design, the subjects were able to achieve a precise reach and a reliable grasp of target objects in a relatively cluttered environment. Additionally, the system is extremely user-friendly: users can quickly adapt to it with minimal training.
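
    The modular, voice-driven architecture described above lends itself to a simple dispatch pattern: the voice module routes each recognized command to a perception, guidance, or control module and returns the result through speech. The Python sketch below illustrates that pattern only; every class, command, and message in it is hypothetical, as the abstract does not publish the system's interfaces.

```python
# Hypothetical sketch of the command-dispatch pattern described above.
# None of these names come from the paper; they only illustrate how a
# voice interaction module might invoke the other functional modules.

class Feedback:
    def speak(self, message: str) -> None:
        print(f"[voice prompt] {message}")

class Perception:
    def describe_scene(self) -> str:
        return "cup detected, 40 cm ahead, slightly left"

class Guidance:
    def guide_to_object(self) -> str:
        return "reach forward and left; vibration will confirm alignment"

class Prosthesis:
    def execute_grasp(self) -> str:
        return "grasp gesture and wrist angle set; object held"

class ViiaStyleDispatcher:
    """Routes a recognized voice command to the matching module."""

    def __init__(self) -> None:
        self.feedback = Feedback()
        self.handlers = {
            "describe": Perception().describe_scene,
            "guide": Guidance().guide_to_object,
            "grasp": Prosthesis().execute_grasp,
        }

    def on_command(self, command: str) -> None:
        handler = self.handlers.get(command)
        if handler is None:
            self.feedback.speak("command not recognized")
        else:
            self.feedback.speak(handler())

if __name__ == "__main__":
    dispatcher = ViiaStyleDispatcher()
    for cmd in ("describe", "guide", "grasp"):
        dispatcher.on_command(cmd)
```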

    Sample-Efficient Training of Robotic Guide Using Human Path Prediction Network

    Training a robot that engages with people is challenging, because involving people in a training process that requires numerous data samples is expensive. This paper proposes a human path prediction network (HPPN) and an evolution strategy-based robot training method that uses virtual human movements generated by the HPPN, which compensates for this sample-inefficiency problem. We applied the proposed method to the training of a robotic guide for visually impaired people, designed to collect multimodal human response data and reflect such data when selecting the robot's actions. We collected 1,507 real-world episodes for training the HPPN and then generated over 100,000 virtual episodes for training the robot policy. User test results indicate that our trained robot accurately guides blindfolded participants along a goal path. In addition, because the reward used during robot policy training was designed to pursue both guidance accuracy and human comfort, our robot improves the smoothness of human motion while maintaining the accuracy of the guidance. This sample-efficient training method is expected to be widely applicable to all robots and computing machinery that physically interact with humans.
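
    As a rough illustration of the training recipe sketched above, the following Python snippet optimizes a toy policy with a basic evolution strategy against a stand-in for HPPN-generated virtual episodes. The toy reward mixing guidance error with a comfort penalty, the parameter shapes, and all names are assumptions for illustration, not the paper's code.

```python
# Toy sketch of the two-stage recipe: a learned human path predictor
# supplies cheap virtual episodes, and an evolution strategy (ES)
# climbs the episode reward.
import numpy as np

rng = np.random.default_rng(0)

def virtual_episode_reward(params: np.ndarray) -> float:
    """Stand-in for rolling out the policy against HPPN-generated human
    movement; rewards both guidance accuracy and human comfort."""
    target = np.array([1.0, -0.5, 0.25])            # fictitious optimum
    guidance_error = np.sum((params - target) ** 2)
    comfort_penalty = 0.1 * np.sum(np.abs(params))  # smoother is cheaper
    return -(guidance_error + comfort_penalty)

def evolution_strategy(dim=3, iters=200, pop=50, sigma=0.1, lr=0.05):
    theta = np.zeros(dim)
    for _ in range(iters):
        eps = rng.standard_normal((pop, dim))        # population of perturbations
        rewards = np.array([virtual_episode_reward(theta + sigma * e)
                            for e in eps])
        rewards = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
        theta += lr / (pop * sigma) * eps.T @ rewards  # ES gradient estimate
    return theta

print(evolution_strategy())  # converges near the fictitious optimum
```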

    SLAM for Visually Impaired People: A Survey

    In recent decades, several assistive technologies for visually impaired and blind (VIB) people have been developed to improve their ability to navigate independently and safely. At the same time, simultaneous localization and mapping (SLAM) techniques have become sufficiently robust and efficient to be adopted in the development of assistive technologies. In this paper, we first report the results of an anonymous survey conducted with VIB people to understand their experience and needs; we focus on digital assistive technologies that help them with indoor and outdoor navigation. Then, we present a literature review of assistive technologies based on SLAM. We discuss proposed approaches and indicate their pros and cons. We conclude by presenting future opportunities and challenges in this domain.

    A survey on computer vision technology in Camera Based ETA devices

    Electronic Travel Aid (ETA) systems are expected to make everyday tasks, such as finding an object and avoiding obstacles, easier for visually impaired persons to perform. Among ETA devices, camera-based ETAs are the newest and have high potential for helping visually impaired persons (VIPs). With recent advances in computer science, and especially computer vision, camera-based ETAs have used several computer vision algorithms and techniques, such as object recognition and stereo vision, to help VIPs perform tasks such as reading banknotes, recognizing people and avoiding obstacles. This paper analyses and appraises the literature in this area, with a focus on the stereo vision technique. Finally, after discussing the methods and techniques used across these works, it concludes that stereo vision is the best technique for helping VIPs in their everyday navigation.
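
    For readers unfamiliar with the stereo vision technique the survey favours, the sketch below shows a typical camera-based depth pipeline using OpenCV block matching: compute a disparity map from a rectified image pair, triangulate depth, and raise a proximity cue. The image files and calibration constants (focal length, baseline, threshold) are placeholders, not values from any reviewed device.

```python
# Sketch of a stereo-vision depth pipeline of the kind the survey discusses.
import cv2
import numpy as np

# Placeholder rectified stereo pair; StereoBM expects 8-bit grayscale.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Disparity via block matching; OpenCV returns fixed-point values * 16.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

# Triangulation: depth = focal_length_px * baseline_m / disparity_px.
focal_px, baseline_m = 700.0, 0.12   # placeholder calibration values
depth = np.where(disparity > 0,
                 focal_px * baseline_m / np.maximum(disparity, 1e-6),
                 np.inf)

# Simple obstacle cue: anything nearer than 1.5 m in the central window.
h, w = depth.shape
center = depth[h // 3: 2 * h // 3, w // 3: 2 * w // 3]
if (center < 1.5).any():
    print(f"obstacle ahead at about {center.min():.2f} m")
```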

    An aesthetics of touch: investigating the language of design relating to form

    How well can designers communicate qualities of touch? This paper presents evidence that they have some capability to do so, much of which appears to have been learned, but that at present they make limited use of such language. Interviews with graduate designer-makers suggest that they are aware of and value the importance of touch and materiality in their work, but lack a vocabulary that matches the detail of their explanations of other aspects, such as their intent or selection of materials. We believe that more attention should be paid to the verbal dialogue that happens in the design process, particularly as other researchers show that even making-based learning has a strong verbal element. However, verbal language alone does not appear adequate for a comprehensive language of touch. Graduate designer-makers' descriptive practices combined non-verbal manipulation with verbal accounts. We thus argue that haptic vocabularies do not simply describe material qualities, but rather are situated competences that physically demonstrate the presence of haptic qualities. Such competences are more important than verbal vocabularies in isolation. Design support for developing and extending haptic competences must take this wide range of considerations into account to comprehensively improve designers' capabilities.

    Wearable obstacle avoidance electronic travel aids for blind and visually impaired individuals : a systematic review

    Background: Wearable obstacle avoidance electronic travel aids (ETAs) have been developed to assist the safe movement of blind and visually impaired individuals (BVIs) in indoor and outdoor spaces. This systematic review aimed to understand the strengths and weaknesses of existing ETAs in terms of hardware functionality, cost, and user experience. These elements may influence the usability of the ETAs and are valuable in guiding the development of superior ETAs in the future. Methods: Formally published studies designing and developing wearable obstacle avoidance ETAs were searched for in six databases, from their inception to April 2023. The PRISMA 2020 and APISSER guidelines were followed. Results: Eighty-nine studies were included for analysis, 41 of which were judged to be of moderate to high quality. Most wearable obstacle avoidance ETAs mainly depend on camera- and ultrasonic-based techniques to perceive the environment, and acoustic feedback was the most common form of human-computer feedback used by the ETAs. In terms of user experience, the efficacy and safety of the device were usually users' primary concerns. Conclusions: Although many conceptualised ETAs have been designed to facilitate BVIs' independent navigation, most of these devices suffer from shortcomings, owing to the nature and limitations of the processors, environment detection techniques and human-computer feedback with which they are equipped. Integrating multiple techniques and hardware into one ETA is a way to improve performance, but there is still a need to address the discomfort of wearing such devices and their high cost. Developing an applicable systematic review guideline, along with a credible quality assessment tool for these types of studies, is also required.
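
    Most of the reviewed devices share one loop: sense the range to an obstacle, then encode it in the feedback channel. The toy Python sketch below shows such a loop for an ultrasonic sensor with acoustic feedback, where the beep rate rises as obstacles get closer. The sensor stub and all constants are assumptions, not any reviewed device's firmware.

```python
# Toy perception-to-feedback loop common to ultrasonic, acoustic ETAs.
import time

def read_distance_m() -> float:
    """Hypothetical sensor stub; a real device would wrap an ultrasonic
    driver such as an HC-SR04's echo timing."""
    return 1.2

def beep_interval_s(distance_m: float, max_range_m: float = 4.0) -> float:
    """Map distance to the pause between beeps: 0.1 s (close) to 1.0 s (far)."""
    clamped = min(max(distance_m, 0.0), max_range_m)
    return 0.1 + 0.9 * (clamped / max_range_m)

for _ in range(10):          # bounded loop for the sketch
    print("\a beep")         # terminal bell as a placeholder tone
    time.sleep(beep_interval_s(read_distance_m()))
```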

    Accessible Autonomy: Exploring Inclusive Autonomous Vehicle Design and Interaction for People who are Blind and Visually Impaired

    Autonomous vehicles are poised to revolutionize independent travel for millions of people worldwide who experience transportation-limiting visual impairments. However, the current trajectory of automotive technology is rife with roadblocks to accessible interaction and inclusion for this demographic. Inaccessible (visually dependent) interfaces and lack of information access throughout the trip are surmountable, yet nevertheless critical, barriers to this potentially life-changing technology. To address these challenges, the programmatic dissertation research presented here includes ten studies, three published papers, and three papers submitted to high-impact outlets that together address accessibility across the complete trip of transportation. The first paper began with a thorough review of the fully autonomous vehicle (FAV) and blind and visually impaired (BVI) literature, as well as the underlying policy landscape. Results guided the identification of pre-journey ridesharing needs among BVI users, which were addressed in paper two via a survey with (n=90) transit service drivers, interviews with (n=12) BVI users, and prototype design evaluations with (n=6) users, all contributing to the Autonomous Vehicle Assistant: an award-winning and accessible ridesharing app. A subsequent study with (n=12) users, presented in paper three, focused on pre-journey mapping to provide critical information access in future FAVs. Accessible in-vehicle interactions were explored in the fourth paper through a survey with (n=187) BVI users. Results prioritized nonvisual information about the trip and indicated the importance of situational awareness. This effort informed the design and evaluation of an ultrasonic haptic HMI intended to promote situational awareness with (n=14) participants (paper five), leading to a novel gestural-audio interface with (n=23) users (paper six). Strong support from users across these studies suggested positive outcomes in pursuit of actionable situational awareness and control. Cumulative results from this dissertation research program represent, to our knowledge, the single most comprehensive approach to FAV BVI accessibility to date. By considering both pre-journey and in-vehicle accessibility, the results pave the way for autonomous driving experiences that enable meaningful interaction for BVI users across the complete trip of transportation. This new mode of accessible travel is predicted to transform independent travel for millions of people with visual impairment, leading to increased independence, mobility, and quality of life.

    A Haptic Study to Inclusively Aid Teaching and Learning in the Discipline of Design

    Designers are known to use a blend of manual and virtual processes to produce design prototype solutions. For modern designers, computer-aided design (CAD) tools are an essential requirement for developing design concept solutions. CAD, together with augmented reality (AR) systems, has altered the face of design practice, as witnessed by the way a designer can now change a 3D concept's shape, form, color, pattern, and texture at the click of a button in minutes, rather than labor on a physical model in the studio for hours. However, CAD can often limit a designer's experience of being 'hands-on' with materials and processes. The rise of machine haptic (MH) tools affords great potential for designers to feel more 'hands-on' with virtual modeling processes. Through the use of MH, product designers are able to control, virtually sculpt, and manipulate virtual 3D objects on-screen. Design practitioners are well placed to make use of haptics to augment 3D concept creation, which is traditionally a highly tactile process. For similar reasons, non-sighted and visually impaired (NS, VI) communities could also benefit from using MH tools to increase touch-based interactions, thereby creating better access for NS, VI designers. In spite of this, the use of MH within the design industry (specifically product design), or by the non-sighted community, is still in its infancy, so the full benefit of haptics in aiding non-sighted designers has not yet been realised. This thesis empirically investigates the use of multimodal MH as a step toward improving the virtual hands-on process for the benefit of NS, VI and fully sighted (FS) designer-makers. The thesis comprises four experiments, embedded within four case studies (CS1-4). Case studies 1 and 2 worked with self-employed NS, VI art makers at Henshaws College for the Blind and Visually Impaired, and examined the effects of haptics on NS, VI users' evaluations of experience. Case studies 3 and 4, featuring experiments 3 and 4, were designed to examine the effects of haptics on distance-learning design students at the Open University. The empirical results from all four case studies showed that NS, VI users were able to navigate and perceive virtual objects via the force from the haptically rendered objects on-screen. Moreover, they were assisted by the whole multimodal MH assistance, which in CS2 appeared to offer better assistance to NS than to FS participants. In CS3 and CS4, MH and multimodal assistance afforded equal assistance to NS, VI, and FS participants, but haptic conditions did not better the completion times recorded in manual (M) conditions; however, the collision data between M and MH conditions showed little statistical difference. The thesis shows that multimodal MH systems, specifically used in kinesthetic mode, enable humans (non-disabled and disabled) to credibly judge objects within the virtual realm. It also shows that multimodal augmented tooling can improve interaction and afford better access to the graphical user interface for a wider body of users.
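
    The thesis repeatedly refers to users perceiving virtual objects via the force from haptically rendered objects on-screen. A common way such forces are rendered is a penalty (spring) model: when the haptic cursor penetrates a virtual surface, the device pushes back in proportion to the penetration depth. The sketch below illustrates that general model for a sphere; the stiffness value and all names are assumptions, not the thesis's implementation.

```python
# Penalty-based haptic force rendering for a virtual sphere.
import numpy as np

def sphere_contact_force(cursor: np.ndarray,
                         center: np.ndarray,
                         radius: float,
                         stiffness: float = 400.0) -> np.ndarray:
    """Return the reaction force (N) on the haptic cursor; zero if no contact."""
    offset = cursor - center
    dist = float(np.linalg.norm(offset))
    if dist >= radius or dist == 0.0:
        return np.zeros(3)                     # outside the surface: no force
    penetration = radius - dist
    normal = offset / dist                     # outward surface normal
    return stiffness * penetration * normal    # Hooke's law: F = k * x

# Cursor 5 mm inside a 50 mm sphere -> 2 N push outward along +x.
print(sphere_contact_force(np.array([0.045, 0.0, 0.0]),
                           np.zeros(3), radius=0.05))
```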

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation

    Navigational assistance aims to help visually impaired people traverse their environment safely and independently. The topic is challenging, as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of visually impaired people to a large extent. However, running all detectors jointly increases latency and burdens computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments proves the qualified accuracy of the approach over state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
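
    To make the unification idea concrete, the sketch below derives two navigation cues, a traversable-ground ratio and a hazard flag, from a single per-pixel semantic map instead of separate detectors. The segmentation stub and class ids are assumptions; the paper's actual network and label set are not reproduced here.

```python
# One per-pixel semantic map feeding every navigation cue at once.
import numpy as np

TRAVERSABLE = {0, 1}     # e.g. sidewalk, terrain (hypothetical ids)
HAZARDS = {5, 6, 7}      # e.g. stairs, water, obstacle (hypothetical ids)

def segment(frame: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for the real-time segmentation network:
    returns one class id per pixel."""
    return np.zeros(frame.shape[:2], dtype=np.int32)

def terrain_cues(frame: np.ndarray) -> dict:
    labels = segment(frame)
    h, _ = labels.shape
    lower = labels[h // 2:, :]          # ground region directly ahead
    return {
        "traversable_ratio": float(np.isin(lower, list(TRAVERSABLE)).mean()),
        "hazard_ahead": bool(np.isin(lower, list(HAZARDS)).any()),
    }

print(terrain_cues(np.zeros((480, 640, 3), dtype=np.uint8)))
```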