869 research outputs found

    A mosaic of eyes

    Autonomous navigation is a traditional research topic in intelligent robotics and vehicles: a robot must perceive its environment through onboard sensors such as cameras or laser scanners in order to drive to its goal. Most research to date has focused on developing a large, smart brain to give robots autonomous capability. An autonomous mobile robot must answer three fundamental questions: 1) Where am I going? 2) Where am I? and 3) How do I get there? To answer these basic questions, a robot requires massive spatial memory and considerable computational resources for perception, localization, path planning, and control. It is not yet possible to deliver the centralized intelligence required for real-life applications such as autonomous ground vehicles and wheelchairs in care centers. In fact, most autonomous robots try to mimic how humans navigate, interpreting images taken by cameras and then making decisions accordingly. They may encounter the following difficulties

    Implementation of target tracking in Smart Wheelchair Component System

    Independent mobility is critical to individuals of any age. While the needs of many individuals with disabilities can be satisfied with power wheelchairs, some members of the disabled community find it difficult or impossible to operate a standard power wheelchair. This population includes, but is not limited to, individuals with low vision, visual field neglect, spasticity, tremors, or cognitive deficits. To meet the needs of this population, our group is developing cost-effective, modularly designed Smart Wheelchairs. Our objective is an assistive navigation system that seamlessly integrates into the lifestyle of individuals with disabilities, providing safe and independent mobility and navigation without imposing an excessive physical or cognitive load. The Smart Wheelchair Component System (SWCS) can be added to a variety of commercial power wheelchairs with minimal modification to provide navigation assistance. Previous versions of the SWCS used acoustic and infrared rangefinders to identify and avoid obstacles, but these sensors do not lend themselves to many desirable higher-level behaviors. To achieve these higher-level behaviors, we integrated a Continuously Adapted Mean Shift (CAMSHIFT) target tracking algorithm into the SWCS, along with the Minimal Vector Field Histogram (MVFH) obstacle avoidance algorithm. The target tracking algorithm provides the basis for two distinct operating modes: (1) a "follow-the-leader" mode, and (2) a "move to stationary target" mode. The ability to track a stationary or moving target will make smart wheelchairs more useful as a mobility aid, and is also expected to be useful for wheeled mobility training and evaluation.
Caregivers, clinicians, and transporters who assist wheelchair users will also benefit: providing wheelchair users with safe and independent mobility reduces the level of assistance they require
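    The CAMSHIFT algorithm named above is built on mean shift: iteratively re-centering a search window on the centroid of a probability map (in practice a color-histogram back-projection of the tracked target). As a hedged illustration of the core step only, not the SWCS implementation, here is a minimal pure-NumPy mean-shift iteration over a toy probability map:

    ```python
    import numpy as np

    def mean_shift(prob_map, window, max_iter=20):
        """Iteratively re-center a search window on the centroid of a probability map.

        window is (cx, cy, w, h); returns the converged center (cx, cy).
        """
        H, W = prob_map.shape
        cx, cy, w, h = window
        for _ in range(max_iter):
            # Clamp the window to the map bounds.
            x0, x1 = max(0, cx - w // 2), min(W, cx + w // 2 + 1)
            y0, y1 = max(0, cy - h // 2), min(H, cy + h // 2 + 1)
            patch = prob_map[y0:y1, x0:x1]
            total = patch.sum()
            if total == 0:        # nothing to track inside the window
                break
            ys, xs = np.mgrid[y0:y1, x0:x1]
            new_cx = int(round((xs * patch).sum() / total))
            new_cy = int(round((ys * patch).sum() / total))
            if (new_cx, new_cy) == (cx, cy):  # converged
                break
            cx, cy = new_cx, new_cy
        return cx, cy

    # Toy probability map: a bright blob near (70, 40); start the window at (50, 50).
    prob = np.zeros((100, 100))
    prob[35:46, 65:76] = 1.0
    print(mean_shift(prob, (50, 50, 30, 30)))  # (70, 40)
    ```

    CAMSHIFT extends this loop by re-sizing and re-orienting the window from the patch moments each frame, which is what lets it follow a target that grows or shrinks as the wheelchair approaches it.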

    Virtual Prototyping and Validation for Autonomous Assistive Mobility

    Physical disability is something no one has control over; it can be catastrophic, and it sometimes keeps people from enjoying the essence of life. Yet while people cannot control disability itself, they can control how they move forward and make life meaningful. Wheelchairs have helped people with disabilities stay mobile, allowing them to move around with the help of others or sometimes on their own. This research focuses on developing an autonomous assistive mobility robot to help the disabled, using virtual prototyping tools for development and validation. The virtual model is also built in the real world and validated for autonomous navigation. Both the virtual and real-world models follow a systems engineering approach. The key features of the system are mapping, localisation, and navigating toward a goal autonomously. The virtual model is validated for its functionality in different virtual environments; the real-world model is built to match its virtual counterpart and is likewise tested and validated. The local path planner implemented is analysed quantitatively for both the real-world and virtual models, and the differences in design and development are identified and analysed. In conclusion, the research has led to a virtual and a real-world model of an autonomous wheelchair, tested and validated in both environments

    Mobile Robots Navigation

    Mobile robot navigation includes different interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, constructing a spatial representation from the sensory information perceived; (iv) localization, estimating the robot's position within the spatial map; (v) path planning, finding a path toward a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research of several authors around the world. Research cases are documented in 32 chapters organized into seven categories, described next
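    Activity (v), path planning, is commonly illustrated with search on an occupancy grid. A minimal sketch, with an illustrative grid and start/goal cells of my own choosing rather than any example from the book, using breadth-first search for the shortest 4-connected path:

    ```python
    from collections import deque

    def bfs_path(grid, start, goal):
        """Shortest 4-connected path on an occupancy grid (0 = free, 1 = obstacle).

        Cells are (row, col) tuples; returns the path as a list, or None if blocked.
        """
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}            # visited set doubling as parent pointers
        queue = deque([start])
        while queue:
            cell = queue.popleft()
            if cell == goal:            # reconstruct the path by walking parents
                path = []
                while cell is not None:
                    path.append(cell)
                    cell = prev[cell]
                return path[::-1]
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and (nr, nc) not in prev:
                    prev[(nr, nc)] = cell
                    queue.append((nr, nc))
        return None                     # goal unreachable

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(bfs_path(grid, (0, 0), (2, 0)))  # routes around the wall via (2, 2)
    ```

    Real planners in this setting typically swap BFS for A* with a distance heuristic and a cost map, but the structure, expand neighbors, track parents, reconstruct the path, is the same.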

    A.Eye Drive: gaze-based semi-autonomous wheelchair interface

    Existing wheelchair control interfaces, such as sip-and-puff or screen-based gaze-controlled cursors, are challenging for the severely disabled to use for safe, independent navigation, as users continuously need to interact with an interface while driving. This puts a significant cognitive load on users and prevents them from interacting with the environment in other ways during navigation. We have combined eye-tracking/gaze-contingent intention decoding with computer-vision context-aware algorithms and autonomous navigation drawn from self-driving vehicles to allow paralysed users to drive by eye, simply by decoding natural gaze about where the user wants to go: A.Eye Drive. Our "Zero UI" driving platform allows users to look at and visually interact with an object or destination of interest in their visual scene, and the wheelchair autonomously takes the user to the intended destination, continuously updating the computed path to account for static and dynamic obstacles. This intention-decoding technology empowers the end-user by promising more independence through their own agency

    An Incremental Navigation Localization Methodology for Application to Semi-Autonomous Mobile Robotic Platforms to Assist Individuals Having Severe Motor Disabilities.

    In the present work, the author explores the issues surrounding the design and development of an intelligent wheelchair platform incorporating the semi-autonomous system paradigm, to meet the needs of individuals with severe motor disabilities. The author presents a discussion of the problems of navigation that must be solved before any system of this type can be instantiated, and enumerates the general design issues that must be addressed by the designers of such systems. This discussion includes reviews of various methodologies that have been proposed as solutions to the problems considered. Next, the author introduces a new navigation method, called Incremental Signature Recognition (ISR), for use by semi-autonomous systems in structured environments. This method is based on the recognition, recording, and tracking of environmental discontinuities: sensor-reported anomalies in measured environmental parameters. The author then proposes a robust, redundant, dynamic, self-diagnosing sensing methodology for detecting and compensating for hidden failures of single sensors and sensor idiosyncrasies. This technique is optimized for the detection of spatial discontinuity anomalies. Finally, the author gives details of an effort to realize a prototype ISR-based system, along with insights into the various implementation choices made
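    The ISR method is built on sensor-reported discontinuities in measured environmental parameters. As a hedged illustration (not the author's implementation), the simplest form of such a detector flags wherever consecutive range readings jump by more than a threshold, which is how a doorway or wall corner shows up in a rangefinder scan:

    ```python
    def find_discontinuities(readings, threshold):
        """Return indices where consecutive range readings jump by more than threshold."""
        return [i for i in range(1, len(readings))
                if abs(readings[i] - readings[i - 1]) > threshold]

    # A wall at ~2 m with a doorway (readings jump to ~5 m) between samples 3 and 6.
    ranges = [2.0, 2.1, 2.0, 5.1, 5.0, 5.2, 2.1, 2.0]
    print(find_discontinuities(ranges, threshold=1.0))  # [3, 6]
    ```

    A sequence of such discontinuities along a corridor forms a "signature" of the route; recognizing the next expected discontinuity, rather than matching a full map, is what makes the approach incremental.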

    Gaze-tracking-based interface for robotic chair guidance

    This research focuses on enhancing the quality of life of wheelchair users by applying a gaze-tracking-based interface to the guidance of a robotized wheelchair. The interface was applied in two different approaches to the wheelchair control system. The first was assisted control, in which the user was continuously involved in steering the wheelchair through the environment and adjusting the inclination of the different parts of the seat via gaze and eye blinks captured by the interface. The second took the first steps toward autonomous control, in which the wheelchair moves autonomously, avoiding collisions, toward a position defined by the user. To this end, this project developed the basis for obtaining the gaze position relative to the wheelchair and for object detection, so that the optimal route for the wheelchair can be computed in the future. The integration of a robotic arm into the wheelchair to manipulate objects was also considered: this work identifies, among the detected objects, the object of interest indicated by the user's gaze, so that in the future the robotic arm could select and pick up the object the user wants to manipulate. Beyond these two approaches, the user's gaze was also estimated without the software interface, using pupil-detection libraries, a calibration procedure, and a mathematical model relating pupil positions to gaze. The results of these implementations are analysed, including some limitations encountered; nevertheless, future improvements are proposed, with the aim of increasing the independence of wheelchair users
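    The calibration step mentioned above, a mathematical model relating pupil positions to gaze, is often fitted by least squares over a handful of calibration targets. A minimal affine sketch with NumPy; the calibration points and the affine form are illustrative assumptions, not the model used in this project:

    ```python
    import numpy as np

    def fit_gaze_model(pupil_xy, gaze_xy):
        """Fit an affine map gaze = [px, py, 1] @ A by least squares.

        pupil_xy: (n, 2) pupil centers; gaze_xy: (n, 2) known gaze targets.
        Returns the (3, 2) coefficient matrix A.
        """
        P = np.column_stack([pupil_xy, np.ones(len(pupil_xy))])  # (n, 3)
        A, *_ = np.linalg.lstsq(P, gaze_xy, rcond=None)          # (3, 2)
        return A

    def predict_gaze(A, pupil):
        px, py = pupil
        return np.array([px, py, 1.0]) @ A

    # Synthetic calibration data generated from gaze = 10 * pupil + (100, 50).
    pupil = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3]], dtype=float)
    gaze = pupil * 10 + np.array([100.0, 50.0])
    A = fit_gaze_model(pupil, gaze)
    print(predict_gaze(A, (1.5, 2.0)))  # ~[115., 70.]
    ```

    Practical eye trackers usually add quadratic terms to the feature vector to absorb corneal curvature, but the fitting machinery is the same least-squares solve.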

    Brain-Computer Interface meets ROS: A robotic approach to mentally drive telepresence robots

    This paper presents and evaluates a novel approach to integrating a non-invasive Brain-Computer Interface (BCI) with the Robot Operating System (ROS) to mentally drive a telepresence robot. Controlling a mobile device with brain signals can improve the quality of life of people suffering from severe physical disabilities or of elderly people who can no longer move. Thanks to a video streaming connection to the robot, the BCI user is able to actively interact with relatives and friends located in different rooms. To facilitate control of the robot via BCI, we explore new ROS-based algorithms for navigation and obstacle avoidance, making the system safer and more reliable. In this regard, the robot exploits two maps of the environment, one for localization and one for navigation, and the BCI user can use both to monitor the robot's position while it is moving. As the experimental results demonstrate, the user's cognitive workload is reduced, decreasing the number of commands necessary to complete the task and helping him/her to sustain attention for longer periods of time.
Comment: Accepted in the Proceedings of the 2018 IEEE International Conference on Robotics and Automation

    Trajectory control strategies for robotized wheelchairs (Estratégias de controle de trajetórias para cadeira de rodas robotizadas)

    Advisor: Eleri Cardozo. Master's dissertation, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. (The Portuguese resumo repeats the content of the English abstract below.)
Abstract: Since the 1980s, several works have been published proposing alternative solutions for users of powered wheelchairs who have severe motor impairments and are not able to operate a mechanical joystick. Such solutions commonly focus on assistive interfaces that help command the wheelchair through mechanisms such as facial expressions, brain-computer interfaces, and eye tracking. Besides that, wheelchairs have achieved a certain level of autonomy to accomplish tasks such as obstacle avoidance, door opening, and even path planning and execution. For these tasks to be performed, the wheelchairs need non-conventional designs, the ability to sense the environment, and locomotion control strategies. The ultimate objective is to offer comfortable and safe driving regardless of the user's mobility impairment. However, while the wheelchair is being driven, misalignment of the caster wheels can pose a risk to the user: depending on how they are initially oriented, instabilities may occur and cause accidents. Caster-wheel misalignment is also considered, alongside uneven weight distribution and differing friction between each wheel and the floor, one of the main causes of deviation from the intended trajectory while the wheelchair is moving. This dissertation treats caster-wheel misalignment as the sole source of path deviation and proposes solutions to reduce or even eliminate its effects.
The implementation of the best solutions developed in this work allows assistive interfaces with low command rates to be used more widely, since the user no longer needs to constantly correct deviations from the desired path. Additionally, a new smart-wheelchair design is elaborated for implementing the techniques developed in this work. (Master's in Electrical Engineering, Computer Engineering track; CAPES grant 88882.329382/2019-01.)
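    The deviation problem the dissertation targets, caster misalignment pulling the chair off its heading, is the kind of disturbance typically countered with feedback on the heading error. A simple proportional-correction sketch; the gain, the wrap-around handling, and the differential-drive interpretation are illustrative assumptions, not the dissertation's controllers:

    ```python
    import math

    def heading_correction(desired_heading, measured_heading, k_p=1.5):
        """Differential wheel-speed correction proportional to the heading error.

        Headings are in radians; the error is wrapped to [-pi, pi] so the chair
        always turns the short way. The result would be added to one wheel's
        speed and subtracted from the other's on a differential-drive base.
        """
        error = math.atan2(math.sin(desired_heading - measured_heading),
                           math.cos(desired_heading - measured_heading))
        return k_p * error

    # Chair drifting 0.2 rad to the right of the desired heading:
    print(round(heading_correction(0.0, -0.2), 3))  # 0.3
    ```

    The practical appeal for low-command-rate interfaces is exactly the point the abstract makes: with the inner loop holding the heading, the user issues direction commands only occasionally instead of continuously fighting the drift.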