3,091 research outputs found

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy-logic-based method to track user satisfaction without the need for devices that monitor users' physiological conditions. User satisfaction is key to any product's acceptance, and computer applications and video games provide a unique opportunity to tailor the environment to each user's needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature, which suggests physiological measurements are needed; we show that it is possible to estimate user emotion with a software-only method.
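The satisfaction-tracking idea (inferring emotion from in-game events rather than physiological signals) can be sketched with a toy fuzzy model. Everything below, from the membership functions to the kill/death input, is an illustrative assumption, not the FLAME-based implementation described in the abstract:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership function rising from a, peaking at b,
    falling to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def estimate_satisfaction(kills, deaths):
    """Map in-game performance to a fuzzy satisfaction score in [0, 1]."""
    ratio = kills / max(deaths, 1)
    # Fuzzify the kill/death ratio into three overlapping sets.
    low = triangular(ratio, -1.0, 0.0, 1.0)
    medium = triangular(ratio, 0.5, 1.0, 2.0)
    high = triangular(ratio, 1.0, 3.0, 5.0)
    # Defuzzify with a weighted average of per-set satisfaction levels.
    weights = low * 0.2 + medium * 0.5 + high * 0.9
    total = low + medium + high
    return weights / total if total else 0.5
```

A real implementation would appraise many event types against the player's goals; this sketch only shows the fuzzify/defuzzify shape of such a software-only estimator.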

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation

    Navigational assistance aims to help visually impaired people move through the environment safely and independently. This topic is challenging because it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have sprung up over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose seizing pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach into a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates competitive accuracy against state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
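The unified-perception idea can be illustrated by deriving several assistive cues from one per-pixel segmentation map instead of running separate detectors. The class ids, thresholds, and "area ahead" heuristic below are assumptions for illustration, not the paper's architecture:

```python
# Hypothetical class ids for the navigation-related categories named in
# the abstract; the real model's label set is an assumption here.
TRAVERSABLE, SIDEWALK, STAIRS, WATER, OBSTACLE, PEDESTRIAN = range(6)

def terrain_cues(seg, hazard_thresh=0.05):
    """Derive coarse assistive cues from a per-pixel segmentation map.

    seg is a list of rows of class ids; one segmentation pass yields
    both terrain awareness and hazard warnings."""
    # Consider only the lower half of the frame, i.e. the area ahead.
    lower = [c for row in seg[len(seg) // 2:] for c in row]
    frac = lambda cls: lower.count(cls) / len(lower)
    cues = []
    if frac(TRAVERSABLE) + frac(SIDEWALK) > 0.5:
        cues.append("path ahead is traversable")
    for cls, name in [(STAIRS, "stairs"), (WATER, "water hazard"),
                      (OBSTACLE, "obstacle"), (PEDESTRIAN, "pedestrian")]:
        if frac(cls) > hazard_thresh:
            cues.append("warning: %s ahead" % name)
    return cues
```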

    RASPV: A robotics framework for augmented simulated prosthetic vision

    One of the main challenges of visual prostheses is to augment the perceived information to improve the experience of their wearers. Given the limited access to implanted patients, new techniques are often evaluated via Simulated Prosthetic Vision (SPV) with sighted people to facilitate experimentation. In this work, we introduce a novel SPV framework and implementation that presents major advantages over previous approaches. First, it is integrated into a robotics framework, which allows us to benefit from a wide range of methods and algorithms from the field (e.g. object recognition, obstacle avoidance, autonomous navigation, deep learning). Second, we go beyond traditional image processing to 3D point-cloud processing with an RGB-D camera, allowing us to robustly detect the floor, obstacles and the structure of the scene. Third, it works either with a real camera or in a virtual environment, which gives us endless possibilities for immersive experimentation through a head-mounted display. Fourth, we incorporate a validated temporal phosphene model that replicates time effects into the generation of visual stimuli. Finally, we have proposed, developed and tested several applications within this framework, such as avoiding moving obstacles, providing a general understanding of the scene, detecting staircases, helping the subject navigate an unfamiliar space, and detecting objects and people. We provide experimental results in real and virtual environments. The code is publicly available at https://www.github.com/aperezyus/RASP
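A minimal sketch of the SPV idea: reduce an input intensity image to a coarse grid of phosphene intensities. The block-average mapping and grid size below are assumptions for illustration; the framework's validated temporal phosphene model is not reproduced here.

```python
def phosphene_grid(image, rows=10, cols=10):
    """Average an intensity image (a list of rows of floats) into a
    rows x cols phosphene map by block averaging, mimicking the coarse
    percept produced by a visual prosthesis."""
    h, w = len(image), len(image[0])
    grid = []
    for r in range(rows):
        grid_row = []
        for c in range(cols):
            # Gather the pixels of the block this phosphene covers.
            block = [image[i][j]
                     for i in range(r * h // rows, (r + 1) * h // rows)
                     for j in range(c * w // cols, (c + 1) * w // cols)]
            grid_row.append(sum(block) / len(block))
        grid.append(grid_row)
    return grid
```

Each grid cell would then be rendered as one phosphene dot on the head-mounted display.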

    The benefits of using a walking interface to navigate virtual environments

    Navigation is the most common interactive task performed in three-dimensional virtual environments (VEs), but it is also a task that users often find difficult. We investigated how body-based information about the translational and rotational components of movement helped participants perform a navigational search task (finding targets hidden inside boxes in a room-sized space). Participants who physically walked around the VE while viewing it on a head-mounted display (HMD) performed 90% of trials perfectly, comparable to participants who had performed an equivalent task in the real world during a previous study. By contrast, participants performed fewer than 50% of trials perfectly if they used a tethered HMD (moving by physically turning but pressing a button to translate) or a desktop display (no body-based information). This is the most complex navigational task in which a real-world level of performance has been achieved in a VE. Behavioral data indicate that both translational and rotational body-based information are required to accurately update one's position during navigation, and participants who walked tended to avoid obstacles even though collision detection was not implemented and no feedback was provided. A walking interface would bring immediate benefits to a number of VE applications.

    Compact Environment Modelling from Unconstrained Camera Platforms

    Mobile robotic systems need to perceive their surroundings in order to act independently. In this work, a perception framework is developed that interprets the data of a binocular camera and transforms it into a compact, expressive model of the environment. This model enables a mobile system to move in a targeted way and interact with its surroundings. It is shown how the developed methods also provide a solid basis for technical assistive aids for visually impaired people.

    A snake-based scheme for path planning and control with constraints by distributed visual sensors

    This paper proposes a robot navigation scheme using wireless visual sensors deployed in an environment. Unlike conventional autonomous robot approaches, the scheme offloads the massive on-board information processing required by a robot to its environment, so that a robot or vehicle with less intelligence can exhibit sophisticated mobility. A three-state snake mechanism is developed for coordinating a series of sensors to form a reference path. Wireless visual sensors communicate internal forces with each other along the reference snake for dynamic adjustment, react to repulsive forces from obstacles, and trigger a state change in the snake body from a flexible state to a rigid or even a broken state under kinematic or environmental constraints. A control snake is further proposed as a tracker of the reference path, taking into account the robot's non-holonomic constraint and limited steering power. A predictive control algorithm is developed to produce an optimal velocity profile under robot dynamic constraints for snake tracking. Together they form a unified solution for robot navigation by distributed sensors that handles the kinematic and dynamic constraints of a robot and reacts to dynamic changes in advance. Simulations and experiments demonstrate the capability of a wireless sensor network to carry out low-level control activities for a vehicle. Funded by the Royal Society and the Natural Science Funding Council (China).
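The internal-force adjustment along the snake can be sketched as a simple relaxation step over the node positions. The elastic and repulsive terms below are illustrative assumptions, not the paper's three-state formulation:

```python
def snake_step(nodes, obstacles, alpha=0.5, beta=0.2, step=0.1):
    """One relaxation step for a snake of (x, y) sensor nodes: each
    interior node feels an elastic pull toward the midpoint of its
    neighbours plus repulsion from nearby obstacles."""
    new = [nodes[0]]                       # endpoints stay fixed
    for i in range(1, len(nodes) - 1):
        x, y = nodes[i]
        (xl, yl), (xr, yr) = nodes[i - 1], nodes[i + 1]
        # Elastic (internal) force keeps the reference path smooth.
        fx = alpha * ((xl + xr) / 2 - x)
        fy = alpha * ((yl + yr) / 2 - y)
        # Repulsive force pushes the path away from each obstacle,
        # falling off with squared distance.
        for ox, oy in obstacles:
            dx, dy = x - ox, y - oy
            d2 = dx * dx + dy * dy + 1e-9
            fx += beta * dx / d2
            fy += beta * dy / d2
        new.append((x + step * fx, y + step * fy))
    new.append(nodes[-1])
    return new
```

Iterating this step drives a bent path toward a smooth, obstacle-avoiding reference curve; the paper additionally lets segments stiffen or break when constraints make further bending infeasible.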

    Development of rear-end collision avoidance in automobiles

    The goal of this work is to develop a Rear-End Collision Avoidance System (RECAS) for automobiles. The most important departure from previous practice is that the new design approach attempts to avoid collisions entirely, instead of minimizing damage by over-designing cars. Rear-end collisions are the third-highest cause of multiple-vehicle fatalities in the U.S., and they appear to result from poor driver awareness and communication. For example, car brake lights illuminate exactly the same whether the car is slowing, stopping, or the driver is simply resting a foot on the pedal. The development of the RECAS includes a thorough review of hardware, software, driver/human factors, and current rear-end collision avoidance systems. Key sensor technologies are identified and reviewed to ease the design effort, and the characteristics, capabilities, and performance of alternative and emerging sensor technologies are compared. The first component of the RECAS monitors the distance and speed of the car ahead; if an unsafe condition is detected, a warning is issued and the vehicle is decelerated if necessary. The second component illuminates independent segments of the brake lights corresponding to the stopping condition of the car, communicating the stopping intensity to the following driver. The RECAS is designed using LabVIEW software. The simulation is designed to meet several criteria: system warnings should place a minimum load on driver attention, and the system should perform well in a variety of driving conditions. To illustrate and test the proposed RECAS methods, a Java program has been developed. This simulation animates a multi-car, multi-lane highway environment where car speeds are assigned randomly, and the proposed RECAS approaches successfully demonstrate rear-end collision avoidance. The Java simulation is an applet, easily accessible through the World Wide Web, and can also be tested for different sensor angles.
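The first RECAS component, monitoring the distance and speed of the car ahead and deciding when to warn or brake, can be sketched with a time-to-collision rule. The threshold values below are illustrative assumptions, not values from this work:

```python
def recas_decision(gap_m, own_speed, lead_speed,
                   reaction_s=1.5, warn_ttc_s=4.0, brake_ttc_s=2.0):
    """Return 'safe', 'warn', or 'brake' for the following vehicle.

    gap_m: distance to the lead car in metres.
    own_speed, lead_speed: speeds in m/s.
    """
    closing = own_speed - lead_speed       # positive means we are closing in
    if closing <= 0:
        return "safe"                      # gap is growing, no threat
    ttc = gap_m / closing                  # seconds until impact at current speeds
    if ttc < brake_ttc_s:
        return "brake"                     # too late to rely on the driver
    if ttc < warn_ttc_s + reaction_s:
        return "warn"                      # leave time for driver reaction
    return "safe"
```

The second component (graduated brake-light segments) would map the lead car's deceleration to how many segments illuminate, giving the following driver the stopping-intensity information that uniform brake lights omit.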

    Haptics Rendering and Applications

    There has been significant progress in haptic technologies, but the incorporation of haptics into virtual environments is still in its infancy. A wide range of human activities, including communication, education, art, entertainment, commerce and science, would change forever if we learned how to capture, manipulate and reproduce haptic sensory stimuli that are nearly indistinguishable from reality. For the field to move forward, many commercial and technological barriers need to be overcome. By rendering how objects feel through haptic technology, we can communicate through a physically based language that has never been explored before. Owing to constant improvement in haptic technology and increasing research into and development of haptics-related algorithms, protocols and devices, haptics technology has a promising future.
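As one concrete example of what "rendering how objects feel" means, here is a penalty-based sketch: when the haptic device's proxy penetrates a virtual surface, a spring force pushes it back out. The stiffness value and flat-wall geometry are assumptions for illustration:

```python
def render_force(proxy_z, wall_z=0.0, stiffness=800.0):
    """Return the feedback force in newtons for a haptic device probing
    a flat wall at height wall_z; a positive force pushes the proxy
    back out of the surface (Hooke's law, F = k * x)."""
    penetration = wall_z - proxy_z         # depth past the surface
    if penetration <= 0:
        return 0.0                         # free space: render no force
    return stiffness * penetration
```

In a real renderer this runs inside a high-rate servo loop (commonly around 1 kHz) so the rendered wall feels stiff rather than spongy.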