
    Detección y modelado de escaleras con sensor RGB-D para asistencia personal

    The ability to move effectively through the environment comes naturally to most people, but it is not easy under some circumstances, such as for people with visual impairments or when moving through particularly complex or unfamiliar environments. Our long-term goal is to create a portable augmented-assistance system to help those who face such circumstances. To this end we can rely on cameras integrated into the assistant. In this work we have focused on the detection module, leaving the remaining modules, such as the interface between the detection and the user, for other work. A guidance system must keep its user away from hazards, but it should also be able to recognise certain features of the environment in order to interact with them. In this work we address the detection of one of the most common structures a person may have to use in daily life: stairs. Finding stairs is doubly beneficial, since it not only helps avoid possible falls but also tells the user that another floor of the building can be reached. To achieve this we use an RGB-D sensor, worn on the subject's chest, which captures colour and depth information of the scene simultaneously and in a synchronised way. The algorithm takes advantage of the depth data to find the floor and thus orient the scene as it appears to the user. A segmentation and classification stage then labels the scene segments as "floor", "walls", "horizontal planes" and a residual class whose members are all considered "obstacles". Next, the stair-detection algorithm determines whether the horizontal planes are steps forming a staircase and orders them hierarchically. If a staircase has been found, the modelling algorithm provides all the information useful to the user: how the staircase is positioned with respect to them, how many steps are visible and what their approximate dimensions are. In short, this work presents a new algorithm for assisting human navigation in indoor environments whose main contribution is a stair detection and modelling algorithm that extracts the information most relevant to the subject. Experiments with video recordings in different environments show good results in both accuracy and response time. We have also compared our results with those reported in other publications, showing that we not only match the efficiency of the state of the art but also contribute several improvements. In particular, our algorithm is the first able to recover the dimensions of a staircase even with obstacles partially blocking the view, such as people going up or down. This work led to a publication accepted at the Second Workshop on Assistive Computer Vision and Robotics at ECCV, presented on 12 September 2014 in Zurich, Switzerland.
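
    As a rough illustration of the plane-labelling and step-grouping stages described above, the sketch below classifies segmented planes by the orientation of their normals relative to the estimated floor and then chains horizontal planes whose height differences fall in a step-like range. The thresholds, data layout and function names are assumptions for illustration, not the implementation from this work.

```python
import numpy as np

# Hypothetical illustration: classify segmented planes (normal + height) into
# "floor", "wall", "horizontal plane" or "obstacle", then group horizontal
# planes into ordered steps. Thresholds and the step-height range are assumed.
UP = np.array([0.0, 0.0, 1.0])          # gravity-aligned axis after floor alignment

def classify_plane(normal, height_above_floor, angle_tol_deg=10.0):
    """Label one plane from its unit normal and mean height over the floor."""
    cos_up = abs(np.dot(normal, UP))
    if cos_up > np.cos(np.radians(angle_tol_deg)):        # roughly horizontal
        if height_above_floor < 0.05:
            return "floor"
        return "horizontal plane"
    if cos_up < np.sin(np.radians(angle_tol_deg)):        # roughly vertical
        return "wall"
    return "obstacle"

def group_steps(horizontal_planes, min_rise=0.10, max_rise=0.25):
    """Order horizontal planes by height and keep chains with step-like rises."""
    planes = sorted(horizontal_planes, key=lambda p: p["height"])
    steps = [planes[0]] if planes else []
    for p in planes[1:]:
        rise = p["height"] - steps[-1]["height"]
        if min_rise <= rise <= max_rise:
            steps.append(p)
    return steps   # ordered candidate staircase; a real system would also check tread overlap
```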

    Unifying terrain awareness for the visually impaired through real-time semantic segmentation

    Navigational assistance aims to help visually impaired people move through the environment safely and independently. This topic becomes challenging as it requires detecting a wide variety of scenes to provide higher-level assistive awareness. Vision-based technologies with monocular detectors or depth sensors have emerged over several years of research. These separate approaches have achieved remarkable results with relatively low processing time and have improved the mobility of impaired people to a large extent. However, running all detectors jointly increases the latency and burdens the computational resources. In this paper, we propose using pixel-wise semantic segmentation to cover navigation-related perception needs in a unified way. This is critical not only for terrain awareness regarding traversable areas, sidewalks, stairs and water hazards, but also for the avoidance of short-range obstacles, fast-approaching pedestrians and vehicles. The core of our unification proposal is a deep architecture aimed at attaining efficient semantic understanding. We have integrated the approach in a wearable navigation system by incorporating robust depth segmentation. A comprehensive set of experiments demonstrates accuracy superior to state-of-the-art methods while maintaining real-time speed. We also present a closed-loop field test involving real visually impaired users, demonstrating the effectiveness and versatility of the assistive framework.
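
    A minimal sketch of the kind of post-processing such a unified approach enables is shown below: given a per-pixel class map from a segmentation network and the aligned depth image, it derives terrain masks and a nearest-hazard distance. The class IDs, thresholds and synthetic data are assumptions, not the paper's architecture or outputs.

```python
import numpy as np

# Hypothetical post-processing sketch: given a per-pixel class map from a
# semantic segmentation network and the aligned depth image, derive the
# navigation cues described above. Class IDs and thresholds are assumptions.
CLASS_IDS = {"traversable": 0, "sidewalk": 1, "stairs": 2, "water": 3,
             "pedestrian": 4, "vehicle": 5, "obstacle": 6}

def terrain_masks(label_map):
    """Boolean masks for the terrain classes relevant to guidance."""
    return {name: label_map == cid for name, cid in CLASS_IDS.items()}

def nearest_hazard_distance(label_map, depth_m, hazard=("pedestrian", "vehicle", "obstacle")):
    """Closest depth (metres) over hazard pixels, or None if the view is clear."""
    mask = np.isin(label_map, [CLASS_IDS[h] for h in hazard]) & (depth_m > 0)
    return float(depth_m[mask].min()) if mask.any() else None

# Example with synthetic data
labels = np.random.randint(0, 7, size=(480, 640))
depth = np.random.uniform(0.3, 8.0, size=(480, 640)).astype(np.float32)
masks = terrain_masks(labels)
print("stairs pixels:", int(masks["stairs"].sum()),
      "nearest hazard [m]:", nearest_hazard_distance(labels, depth))
```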

    Stair Negotiation Made Easier using Novel Interactive Energy-Recycling Assistive Stairs

    Here we show that novel, energy-recycling stairs reduce the amount of work required for humans to both ascend and descend stairs. Our low-power, interactive, and modular steps can be placed on existing staircases, storing energy during stair descent and returning that energy to the user during stair ascent. Energy is recycled through event-triggered latching and unlatching of passive springs without the use of powered actuators. When ascending the energy-recycling stairs, naive users generated 17.4 ± 6.9% less positive work with their leading legs compared to conventional stairs, with the knee joint positive work reduced by 37.7 ± 10.5%. Users also generated 21.9 ± 17.8% less negative work with their trailing legs during stair descent, with ankle joint negative work reduced by 26.0 ± 15.9%. Our low-power energy-recycling stairs have the potential to assist people with mobility impairments during stair negotiation on existing staircases
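
    The energy-recycling behaviour rests on event-triggered latching of passive springs, which can be pictured as a simple two-state controller per step. The sketch below is a generic illustration of that idea, not the authors' controller; the states, event names, spring stiffness and compression values are all invented for the example.

```python
from enum import Enum, auto

# Illustrative sketch only: a minimal state machine for event-triggered latching
# of a passive spring step, as the abstract describes (no powered actuators).
# States, events and parameter values are generic assumptions.
class StepState(Enum):
    UNLOADED = auto()      # spring extended, step raised, waiting for descent
    LATCHED = auto()       # spring compressed and locked, energy stored

def spring_energy_joules(stiffness_n_per_m, compression_m):
    """Elastic energy stored in a linear spring: E = 1/2 * k * x^2."""
    return 0.5 * stiffness_n_per_m * compression_m ** 2

def on_event(state, event):
    """Latch when a descending foot compresses the step; release on ascent."""
    if state is StepState.UNLOADED and event == "descent_foot_loaded":
        return StepState.LATCHED          # store energy from the descending user
    if state is StepState.LATCHED and event == "ascent_trailing_foot_lifted":
        return StepState.UNLOADED         # return energy to the ascending user
    return state

state = StepState.UNLOADED
state = on_event(state, "descent_foot_loaded")
print(state, spring_energy_joules(stiffness_n_per_m=7000.0, compression_m=0.17), "J stored")
```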

    Daily locomotion recognition and prediction: A kinematic data-based machine learning approach

    More versatile, user-independent tools for recognizing and predicting locomotion modes (LMs) and LM transitions (LMTs) in natural gaits are still needed. This study tackles these challenges by proposing an automatic, user-independent recognition and prediction tool that uses easily wearable kinematic motion sensors to classify several LMs (walking direction, level-ground walking, ascending and descending stairs, and ascending and descending ramps) and the respective LMTs. We compared diverse state-of-the-art feature processing and dimensionality reduction methods and machine-learning classifiers to find an effective tool for recognition and prediction of LMs and LMTs. The comparison included kinematic patterns from 10 able-bodied subjects. The most accurate tool was achieved using min-max scaling to the [-1, 1] interval and the 'mRMR plus forward selection' algorithm for feature normalization and dimensionality reduction, respectively, together with a Gaussian support vector machine classifier. The developed tool was accurate in the recognition (accuracy >99% and >96%) and prediction (accuracy >99% and >93%) of daily LMs and LMTs, respectively, using exclusively kinematic data. The use of kinematic data yielded an effective recognition and prediction tool, predicting the LMs and LMTs one step ahead. This timely prediction is relevant for assistive devices providing personalized assistance in daily scenarios. The kinematic data-based machine learning tool innovatively addresses several LMs and LMTs while allowing the user to self-select the leading limb to perform LMTs, ensuring a natural gait. This work was supported in part by the Fundação para a Ciência e Tecnologia (FCT) with the Reference Scholarship under Grant SFRH/BD/108309/2015 and SFRH/BD/147878/2019, by the FEDER Funds through the Programa Operacional Regional do Norte and national funds from FCT with the project SmartOs under Grant NORTE-01-0145-FEDER-030386, and through the COMPETE 2020 - Programa Operacional Competitividade e Internacionalização (POCI) - with the Reference Project under Grant POCI-01-0145-FEDER-006941.
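
    The reported pipeline (min-max scaling to [-1, 1], mRMR plus forward selection, and a Gaussian SVM) can be approximated with off-the-shelf components as sketched below. Since scikit-learn does not provide mRMR, plain forward feature selection is used here as a stand-in; the data, feature counts and hyperparameters are assumptions for illustration only.

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Rough sketch of the reported pipeline with scikit-learn stand-ins: min-max
# scaling to [-1, 1], forward feature selection (in place of mRMR + forward
# selection), and a Gaussian (RBF) SVM. Data below is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 24))            # e.g. windowed kinematic features
y = rng.integers(0, 6, size=600)          # six locomotion modes

pipeline = make_pipeline(
    MinMaxScaler(feature_range=(-1, 1)),
    SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=10,
                              direction="forward", cv=3),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
print("CV accuracy:", cross_val_score(pipeline, X, y, cv=3).mean())
```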

    Design and Development of Assistive Robots for Close Interaction with People with Disabilities

    People with mobility and manipulation impairments wish to live and perform tasks as independently as possible; however, for many tasks, compensatory technology to do so does not exist. Assistive robots have the potential to address this need. This work describes various aspects of the development of three novel assistive robots: the Personal Mobility and Manipulation Appliance (PerMMA), the Robotic Assisted Transfer Device (RATD), and the Mobility Enhancement Robotic Wheelchair (MEBot). PerMMA integrates mobility with advanced bi-manual manipulation to assist people with both upper and lower extremity impairments. The RATD is a wheelchair-mounted robotic arm that can lift higher payloads; its primary aim is to assist caregivers of people who cannot independently transfer from their electric powered wheelchair to other surfaces such as a shower bench or toilet. MEBot is a wheeled robot with highly reconfigurable kinematics, which allow it to negotiate challenging terrain such as steep ramps, gravel, or stairs. A risk analysis was performed on all three robots, including a Fault Tree Analysis (FTA) and a Failure Mode and Effects Analysis (FMEA), to identify potential risks and inform strategies to mitigate them. Identified risks for PerMMA include dropping sharp or hot objects. Critical risks identified for the RATD included tip-over, crush hazards, and getting stranded mid-transfer, and risks for MEBot include getting stranded on obstacles and tip-over. Lastly, several critical factors to guide future assistive robot design, such as the early involvement of people with disabilities, are presented.
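
    For readers unfamiliar with FMEA, the sketch below shows the generic pattern of ranking failure modes by a Risk Priority Number (RPN = severity x occurrence x detection). It is an illustration only: the scores and rankings are invented and do not reproduce the analysis performed in this work.

```python
from dataclasses import dataclass

# Generic FMEA-style illustration (not the authors' actual analysis): each
# failure mode gets a Risk Priority Number and the riskiest modes are
# addressed first. All scores below are invented for the example.
@dataclass
class FailureMode:
    robot: str
    description: str
    severity: int      # 1 (negligible) .. 10 (catastrophic)
    occurrence: int    # 1 (rare) .. 10 (frequent)
    detection: int     # 1 (easily detected) .. 10 (undetectable)

    @property
    def rpn(self) -> int:
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("RATD", "tip over during transfer", 9, 3, 4),
    FailureMode("RATD", "user stranded mid-transfer", 8, 4, 3),
    FailureMode("MEBot", "stranded on obstacle", 6, 5, 3),
    FailureMode("PerMMA", "dropping a sharp or hot object", 7, 3, 5),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"{m.robot}: {m.description} -> RPN {m.rpn}")
```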

    Application of Smart Insoles for Recognition of Activities of Daily Living: A Systematic Review

    Recent years have witnessed a growing literature on the use of smart insoles in health and well-being, and yet their capability for recognising daily living activities has not been reviewed. This paper addresses this need and provides a systematic review of smart insole-based systems for the recognition of Activities of Daily Living (ADLs). The review followed the PRISMA guidelines, assessing the sensing elements used, the participants involved, the activities recognised, and the algorithms employed. The findings demonstrate the feasibility of using smart insoles for recognising ADLs, showing high performance in recognising ambulation and physical activities involving the lower body, with accuracy ranging from 70% to 99.8% and 13 studies above 95%. The preferred solutions have been those based on machine learning. A lack of publicly available datasets was identified, and the majority of the studies were conducted in controlled environments. Furthermore, no studies assessed the impact of different sampling frequencies during data collection, and a trade-off between comfort and performance was identified across the solutions. In conclusion, real-life applications were investigated, showing the benefits of smart insoles over other solutions and placing more emphasis on the capabilities of smart insoles.
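
    A typical insole-based ADL recognition pipeline of the kind the reviewed studies describe combines sliding windows over plantar-pressure samples, simple per-channel features, and a machine-learning classifier. The sketch below illustrates that pattern only; the sensor count, window length, labels and synthetic data are assumptions and not taken from any reviewed study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative only: sliding windows over plantar-pressure samples, simple
# statistical features per sensing element, and a classifier. All data,
# dimensions and labels below are synthetic assumptions.
def window_features(pressure, win=100, step=50):
    """Mean/std/max per insole pressure channel over sliding windows."""
    feats = []
    for start in range(0, len(pressure) - win + 1, step):
        w = pressure[start:start + win]
        feats.append(np.concatenate([w.mean(0), w.std(0), w.max(0)]))
    return np.array(feats)

rng = np.random.default_rng(1)
pressure = rng.uniform(0, 100, size=(5000, 8))      # 8 pressure elements
X = window_features(pressure)
y = rng.integers(0, 5, size=len(X))                 # e.g. walk, stairs up/down, sit, stand
print("CV accuracy:", cross_val_score(RandomForestClassifier(), X, y, cv=3).mean())
```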

    Low obstacles avoidance for lower limb exoskeletons

    Powered lower limb exoskeletons (LLEs) are innovative wearable robots that allow independent walking in people with severe gait impairments, or even augment the lower limb capabilities of able-bodied users. Despite recent advancements, the use of this promising technology is still restricted to controlled research and clinical settings; uptake in real-life conditions as a device to promote user independence is still lacking. The main reason behind this limitation can be traced back to the lack of adaptability of LLEs to the different walking conditions that may be encountered in real-world settings: the majority of LLEs rely on predefined gait trajectories and are generally unaware of the environment in which gait occurs. This means that the control burden is entirely on the user, resulting in an increased physical and cognitive workload. This thesis aims at overcoming the aforementioned limitations by proposing a novel approach to enhance the autonomy of LLEs. In particular, the proposed method estimates the optimal gait trajectory of the exoskeleton in order to autonomously avoid low obstacles on the ground. Using a depth camera coupled with a Computer Vision software module, the environment is sensed to detect the ground plane and obstacles that might interfere with the forward motion, in order to predict the next foothold. Then, an iterative collision-free foot trajectory generator (CFFTG) algorithm computes the optimal foot motion and the joint angles to be sent to the exoskeleton's low-level controllers. Experimental tests have been carried out in simulation to evaluate both the CV module and the CFFTG on real data, showing successful performance in different scenarios. In addition, the assumptions made in this work make the proposed approach compatible with the majority of exoskeletons in research and on the market. I believe that re-thinking exoskeletons as semi-autonomous agents will represent not only the cornerstone of a more symbiotic human-exoskeleton interaction but may also pave the way for the use of this technology in everyday life.
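
    To convey the iterative collision-free trajectory idea, the sketch below raises the apex of a simple sine-shaped swing path until every sample clears a detected obstacle by a safety margin. It is written in the spirit of the CFFTG described above but is not the thesis implementation; the trajectory shape, parameters and obstacle model are all assumptions.

```python
import numpy as np

# Hypothetical sketch of an iterative collision-free swing trajectory. The foot
# path is a sine-shaped swing whose apex is raised until every sample clears
# the detected obstacle by a safety margin. All parameters are assumed.
def swing_path(step_length, apex_height, n=50):
    """Foot (x, z) samples for one swing phase."""
    x = np.linspace(0.0, step_length, n)
    z = apex_height * np.sin(np.pi * x / step_length)
    return x, z

def collision_free_swing(step_length, obstacle_x, obstacle_h, obstacle_w,
                         clearance=0.03, apex0=0.08, dz=0.01, max_iter=50):
    """Raise the swing apex iteratively until the obstacle is cleared."""
    apex = apex0
    for _ in range(max_iter):
        x, z = swing_path(step_length, apex)
        over = np.abs(x - obstacle_x) <= obstacle_w / 2          # samples above the obstacle
        if not over.any() or (z[over] >= obstacle_h + clearance).all():
            return x, z, apex                                    # trajectory is collision free
        apex += dz                                               # otherwise raise the apex and retry
    raise RuntimeError("no feasible swing found; replan the foothold instead")

x, z, apex = collision_free_swing(step_length=0.55, obstacle_x=0.30,
                                  obstacle_h=0.10, obstacle_w=0.12)
print(f"selected apex height: {apex:.2f} m")
```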

    Human Action Recognition with RGB-D Sensors

    Human action recognition, also known as HAR, is at the foundation of many different applications related to behavioral analysis, surveillance, and safety, and it has therefore been a very active research area in recent years. The release of inexpensive RGB-D sensors fostered researchers working in this field, because depth data simplify the processing of visual data that could otherwise be difficult using classic RGB devices. Furthermore, the availability of depth data allows the implementation of solutions that are unobtrusive and privacy preserving with respect to classic video-based analysis. In this scenario, the aim of this chapter is to review the most salient techniques for HAR based on depth signal processing, providing some details on a specific method based on a temporal pyramid of key poses, evaluated on the well-known MSR Action3D dataset.
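
    A rough sketch of a temporal-pyramid-of-key-poses descriptor, in the spirit of the method mentioned above, is given below. It is not the authors' implementation: skeleton frames are quantised against key poses learned with k-means, and histograms of key poses are concatenated over pyramid levels; the feature dimensions, number of key poses and pyramid depth are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of a temporal-pyramid-of-key-poses descriptor (illustrative only).
# Frames are quantised against K key poses, and key-pose histograms are
# concatenated over pyramid levels (whole sequence, halves, quarters).
def keypose_codebook(training_frames, k=20, seed=0):
    """Learn K key poses from stacked per-frame skeleton feature vectors."""
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit(training_frames)

def temporal_pyramid_descriptor(sequence, codebook, levels=3):
    """Concatenated key-pose histograms over a temporal pyramid."""
    labels = codebook.predict(sequence)           # one key-pose index per frame
    k = codebook.n_clusters
    descriptor = []
    for level in range(levels):
        for segment in np.array_split(labels, 2 ** level):
            hist = np.bincount(segment, minlength=k).astype(float)
            descriptor.append(hist / max(len(segment), 1))
    return np.concatenate(descriptor)             # length k * (1 + 2 + 4) for 3 levels

# Synthetic example: 60-dimensional skeleton features (e.g. 20 joints x 3 coords)
rng = np.random.default_rng(0)
codebook = keypose_codebook(rng.normal(size=(2000, 60)))
descriptor = temporal_pyramid_descriptor(rng.normal(size=(120, 60)), codebook)
print(descriptor.shape)   # (140,) = 20 key poses * 7 pyramid segments
```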
