    Fabrication of the Kinect Remote-controlled Cars and Planning of the Motion Interaction Courses

    This paper describes the fabrication of Kinect remote-controlled cars using a PC, a Kinect sensor, an interface control circuit, an embedded controller, and a brake device, as well as the planning of motion-interaction courses. The Kinect sensor first detects the user's body movements and converts them into control commands. The PC then sends the commands to an Arduino control board via XBee wireless communication modules. The interface circuit controls the movement and direction of the motors: forward and backward, left and right. To develop the content of the Kinect motion-interaction courses, this study conducted a literature review of existing curricula and interviewed experts to collect data on learning background, teaching content, and unit content. Based on these data, teaching units and outlines were developed as a reference for curriculum design.
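The gesture-to-command step described above can be sketched as a simple rule over tracked joint positions. This is a minimal illustration, not the authors' implementation: the function name, the joints used, and the threshold are all assumptions.

```python
def gesture_to_command(left_hand_y, right_hand_y, shoulder_y, threshold=0.15):
    """Map hand heights (metres, from a hypothetical skeleton tracker) to a
    drive command. Raising both hands drives forward; one hand steers."""
    left_up = left_hand_y > shoulder_y + threshold
    right_up = right_hand_y > shoulder_y + threshold
    if left_up and right_up:
        return "FORWARD"
    if left_up:
        return "LEFT"
    if right_up:
        return "RIGHT"
    return "STOP"

# The resulting command string would then be sent to the Arduino board,
# e.g. over a serial link bridged by the XBee modules.
```

In a real pipeline the command would be written to a serial port each frame; the sketch only covers the mapping itself.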

    A Smart Real-Time Standalone Route Recognition System for Visually Impaired Persons

    Visual impairment is a common disability that results in poor or no eyesight, and its victims suffer inconveniences in performing their daily tasks. Visually impaired persons require aids to interact with their environment safely. Existing navigation systems, such as electronic travel aids (ETAs), are mostly cloud-based and rely heavily on the internet and Google Maps, so deploying them in locations with poor internet facilities and poorly structured environments is not feasible. This paper proposes a smart real-time standalone route recognition system for visually impaired persons. The proposed system uses a pedestrian route network, an interconnection of paths and their associated route tables, to provide directions to known locations in real time. Federal University of Technology (FUT), Minna, Gidan Kwanu campus was used as the case study. Field testing of the device's search strategy showed that the worst-case complexity of the algorithm used to search for paths in the pedestrian network is O(N), where N is the number of paths available in the network. The accuracy of path recognition is 100%. This implies that the developed system is reliable and can be used by the visually impaired to recognise and navigate routes in real time.
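A search over a pedestrian route network with worst-case cost linear in the number of paths can be sketched as a breadth-first search over the route tables. This is an illustrative reconstruction, not the paper's algorithm: the function name, the adjacency-list representation, and the example locations are assumptions.

```python
from collections import deque

def find_route(route_table, start, goal):
    """Breadth-first search over a pedestrian route network given as an
    adjacency list {location: [reachable locations]}. Each path is visited
    at most once, so the worst case is linear in the number of paths."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in route_table.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None  # goal not reachable from start

# Hypothetical campus fragment:
campus = {
    "gate": ["library", "hostel"],
    "library": ["lecture_hall"],
    "hostel": [],
    "lecture_hall": [],
}
```

`find_route(campus, "gate", "lecture_hall")` would return the ordered list of locations along the route.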

    Detection and modelling of staircases using a wearable depth sensor

    In this paper we deal with the perception task of a wearable navigation assistant. Specifically, we have focused on the detection of staircases because of the important role they play in indoor navigation: they make multiple floors reachable, yet they pose a safety hazard, especially for those who suffer from visual deficiencies. We use the depth-sensing capabilities of modern RGB-D cameras to segment and classify the different elements that make up the scene, and then run the stair detection and modelling algorithm to retrieve all the information that might interest the user, i.e., the location and orientation of the staircase, the number of steps, and the step dimensions. Experiments prove that the system is able to perform in real time and works even under partial occlusions of the stairway.
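The core test behind classifying a segmented horizontal plane as a stair step is a dimensional check against human-scale step geometry. The following sketch is an assumption-laden illustration, not the authors' algorithm: the function name and the riser/tread/width bounds are hypothetical (loosely inspired by common building-code ranges).

```python
def is_step(plane_height, prev_height, depth, width,
            riser=(0.10, 0.20), tread_min=0.25, width_min=0.60):
    """Decide whether a detected horizontal plane is a plausible stair step.
    All lengths in metres: the rise above the previous step must fall in a
    human-scale riser range, and the tread and width must be large enough."""
    rise = plane_height - prev_height
    return (riser[0] <= rise <= riser[1]
            and depth >= tread_min
            and width >= width_min)
```

A detector would apply such a check to consecutive horizontal planes, accepting a chain of them as a staircase and reading off the step count and dimensions from the accepted planes.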

    Safe Local Navigation for Visually Impaired Users With a Time-of-Flight and Haptic Feedback Device

    This paper presents ALVU (Array of Lidars and Vibrotactile Units), a contactless, intuitive, hands-free, and discreet wearable device that allows visually impaired users to detect low- and high-hanging obstacles, as well as physical boundaries in their immediate environment. The solution allows for safe local navigation in both confined and open spaces by enabling the user to distinguish free space from obstacles. The device is composed of two parts: a sensor belt and a haptic strap. The sensor belt is an array of time-of-flight distance sensors worn around the front of the user's waist, whose pulses of infrared light provide reliable and accurate measurements of the distances between the user and surrounding obstacles or surfaces. The haptic strap communicates the measured distances through an array of vibratory motors worn around the user's upper abdomen, providing haptic feedback. The linear vibration motors are combined with a point-loaded pretensioned applicator to transmit isolated vibrations to the user. We validated the device's capability in an extensive user study entailing 162 trials with 12 blind users. Users wearing the device successfully walked through hallways, avoided obstacles, and detected staircases. Funding: Andrea Bocelli Foundation; National Science Foundation (U.S.) (Grant NSF IIS1226883).
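The distance-to-vibration mapping at the heart of such a sensor-belt/haptic-strap pairing can be sketched as a simple ramp: full intensity inside a near threshold, silence beyond a far threshold, and a linear fade in between. The function name and the threshold values are assumptions for illustration, not ALVU's actual transfer function.

```python
def vibration_intensity(distance_m, near=0.3, far=2.0):
    """Map a time-of-flight distance reading (metres) to a normalised motor
    intensity in [0, 1]: 1.0 at or inside `near`, 0.0 at or beyond `far`,
    linear in between. Hypothetical thresholds."""
    if distance_m <= near:
        return 1.0
    if distance_m >= far:
        return 0.0
    return (far - distance_m) / (far - near)
```

Each motor in the strap would be driven by the intensity computed from its corresponding sensor, so the spatial pattern of vibration mirrors the layout of nearby obstacles.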

    Stairs detection with odometry-aided traversal from a wearable RGB-D camera

    Stairs are one of the most common structures in human-made scenarios, but also one of the most dangerous for those with vision problems. In this work we propose a complete method to detect, locate, and parametrise stairs with a wearable RGB-D camera. Our algorithm uses the depth data to determine whether the horizontal planes in the scene are valid steps of a staircase, judging by their dimensions and relative positions. As a result we obtain a scaled model of the staircase with its spatial location and orientation with respect to the subject. The visual odometry is also estimated to continuously recover the current position and orientation of the user while moving. This enhances the system, giving it the ability to return to previously detected features and providing the user with location awareness during the climb. Simultaneously, the detection of the staircase during the traversal is used to correct the drift of the visual odometry. A comparison of our stair detection results with other state-of-the-art algorithms was performed using a public dataset. Additional experiments have also been carried out, recording our own natural scenes with a chest-mounted RGB-D camera in indoor scenarios. The algorithm is robust enough to work in real time and even under partial occlusions of the staircase.

    Airport Accessibility and Navigation Assistance for People with Visual Impairments

    People with visual impairments often have to rely on the assistance of sighted guides in airports, which prevents them from having an independent travel experience. To learn about their perspectives on current airport accessibility, we conducted two focus groups that discussed their needs and experiences in depth, as well as the potential role of assistive technologies. We found that independent navigation is a main challenge and severely impacts their overall experience. As a result, we equipped an airport with a Bluetooth Low Energy (BLE) beacon-based navigation system and performed a real-world study where users navigated routes relevant to their travel experience. We found that despite the challenging environment, participants were able to complete their itineraries independently, making few to no navigation errors and with reasonable completion times. This study presents the first systematic evaluation positioning BLE technology as a strong approach to increasing the independence of visually impaired people in airports.
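The basic localisation primitive behind a BLE beacon navigation system is picking the closest beacon from received signal strengths, optionally converting RSSI to a rough distance with a path-loss model. This is a generic sketch under stated assumptions, not the system deployed in the study; the function names, the reference power, and the path-loss exponent are hypothetical.

```python
def nearest_beacon(rssi_readings):
    """Return the id of the beacon with the strongest (least negative)
    RSSI, taken as the user's current position in the route."""
    return max(rssi_readings, key=rssi_readings.get)

def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Rough distance (metres) via the log-distance path-loss model:
    d = 10 ** ((tx_power - rssi) / (10 * n)), where tx_power is the
    assumed RSSI at 1 m and n an assumed environment exponent."""
    return 10 ** ((tx_power - rssi) / (10 * n))
```

In practice RSSI is noisy, so deployed systems smooth readings over a window before choosing a beacon; the sketch omits that step.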

    A Spot Reminder System for the Visually Impaired Based on a Smartphone Camera

    The present paper proposes a smartphone-camera-based system to assist visually impaired users in recalling memories related to important locations, called spots, that they have visited. The memories are recorded as voice memos, which can be played back when the users return to the spots. Spot-to-spot correspondence is determined by image matching based on the scale-invariant feature transform (SIFT). The main contribution of the proposed system is to allow visually impaired users to associate arbitrary voice memos with arbitrary spots. The users need no special devices or systems other than smartphones and do not need to remember where the voice memos were recorded. In addition, the proposed system can identify spots in environments that are inaccessible to the Global Positioning System. The proposed system has been evaluated through two experiments: image matching tests and a user study. The experimental results suggest the effectiveness of the system in helping visually impaired individuals, including blind individuals, recall information about regularly visited spots.
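SIFT-based spot matching typically accepts a correspondence between two sets of local descriptors only when the nearest neighbour is clearly closer than the second nearest (Lowe's ratio test). The sketch below illustrates that test on raw descriptor arrays; the function name, the 0.75 ratio, and the use of plain Euclidean distance are assumptions, not the paper's exact matching pipeline.

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B with Lowe's ratio test:
    keep (i, j) only when the nearest descriptor in desc_b is clearly
    closer than the second nearest. desc_a, desc_b: 2-D float arrays,
    one descriptor per row."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distance to every B descriptor
        order = np.argsort(dists)
        if len(order) >= 2 and dists[order[0]] < ratio * dists[order[1]]:
            matches.append((i, int(order[0])))
    return matches
```

A spot would be recognised when the number of surviving matches against its stored reference image exceeds some threshold, triggering playback of the associated voice memo.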

    Detection and modelling of staircases with an RGB-D sensor for personal assistance

    The ability to move effectively through the environment comes naturally to most people, but it is not easy under some circumstances, such as for people with visual problems or when moving through especially complex or unknown environments. Our long-term goal is to create a portable augmented-assistance system to help those who face such circumstances, aided by cameras integrated into the assistant. In this work we have focused on the detection module, leaving the remaining modules, such as the interface between detection and user, for future work. A guidance system must keep its user away from hazards, but it should also be able to recognise certain features of the environment in order to interact with them. In this work we address the detection of one of the most common structures a person may have to use in daily life: staircases. Finding staircases is doubly beneficial, since it not only helps avoid possible falls but also tells the user that another floor of the building can be reached. To achieve this we use an RGB-D sensor, mounted on the subject's chest, which simultaneously captures synchronised colour and depth information of the scene. The algorithm takes advantage of the depth data to find the floor and thus orient the scene as it appears to the user. A segmentation and classification stage then yields the segments corresponding to "floor", "walls", and "horizontal planes", plus a residual class whose members are all considered "obstacles".
Next, the stair detection algorithm determines whether the horizontal planes are steps forming a staircase and orders them hierarchically. If a staircase is found, the modelling algorithm provides all the information useful to the user: how the staircase is positioned relative to them, how many steps are visible, and their approximate dimensions. In short, this work presents a new algorithm to aid human navigation in indoor environments, whose main contribution is a stair detection and modelling algorithm that extracts the information most relevant to the subject. Experiments were carried out on video recordings in different environments, achieving good results in both accuracy and response time. We also compared our results with those reported in other publications, showing that we not only match the efficiency of the state of the art but also contribute a number of improvements. In particular, our algorithm is the first capable of obtaining the dimensions of the stairs even with obstacles partially blocking the view, such as people going up or down. This work resulted in a publication accepted at the Second Workshop on Assistive Computer Vision and Robotics at ECCV, presented on 12 September 2014 in Zürich, Switzerland.