22 research outputs found

    A non-holonomic, highly human-in-the-loop compatible, assistive mobile robotic platform guidance navigation and control strategy

    The provision of assistive mobile robotics for empowering and providing independence to the infirm, disabled and elderly in society has been the subject of much research. The issue of providing navigation and control assistance to users, enabling them to drive their powered wheelchairs effectively, can be complex and wide-ranging: some users fatigue quickly and find that they are unable to operate the controls safely; others may have a brain injury resulting in periodic hand tremors; quadriplegics may use a straw-like switch in the mouth to provide a digital control signal. Advances in autonomous robotics have led to the development of smart wheelchair systems which attempt to address these issues; however, the autonomous approach has, according to research, not been successful: users report that they want to be active drivers, not passengers. Recent methodologies have used collaborative or shared control, which aims to predict or anticipate the need for the system to take over when some pre-decided threshold has been met, yet these approaches still take control away from the user. This removal of human supervision and control by an autonomous system makes responsibility for accidents seriously problematic. This thesis introduces a new human-in-the-loop control structure with real-time assistive levels. One of these levels offers improved dynamic modelling, and three offer unique and novel real-time solutions for collision avoidance, localisation and waypoint identification, and assistive trajectory generation. This architecture and these assistive functions always leave the user fully in control of any motion of the powered wheelchair, as shown in a series of experiments.

    Brain-Controlled Wheelchairs: A Robotic Architecture

    Independent mobility is core to being able to perform activities of daily living by oneself. However, powered wheelchairs are not an option for a large number of people who are unable to use conventional interfaces due to severe motor disabilities. Non-invasive brain-computer interfaces (BCIs) offer a promising solution to this interaction problem, and in this article we present a shared control architecture that couples the intelligence and desires of the user with the precision of a powered wheelchair. We show how four healthy subjects are able to master control of the wheelchair using an asynchronous motor imagery-based BCI protocol, and how this results in higher overall task performance compared with alternative synchronous P300-based approaches.
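The shared control idea in this abstract, coupling the user's intent with the machine's precision, can be illustrated by a minimal command-blending sketch. This is a generic illustration and not the architecture used in the article: the function name, the (linear, angular) velocity convention and the fixed blending weight `alpha` are all assumptions.

```python
def blend_commands(user_cmd, robot_cmd, alpha):
    """Blend a user command with an autonomous correction.

    alpha in [0, 1]: 0 = full user control, 1 = full autonomy.
    Commands are (linear_velocity, angular_velocity) tuples.
    """
    v = (1 - alpha) * user_cmd[0] + alpha * robot_cmd[0]
    w = (1 - alpha) * user_cmd[1] + alpha * robot_cmd[1]
    return (v, w)

# User drives forward while the controller nudges away from a wall;
# the result is roughly (0.7, -0.125).
blended = blend_commands((0.8, 0.0), (0.4, -0.5), 0.25)
```

In practice such a weight would be driven by context (obstacle proximity, BCI confidence) rather than fixed, which is exactly the design space shared control explores.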

    Behavioural strategy for indoor mobile robot navigation in dynamic environments

    PhD thesis. The development of behavioural strategies for indoor mobile navigation has become a challenging and practical issue in cluttered indoor environments, such as hospitals or factories, where there are many static and moving objects, including humans and other robots, all trying to complete their own specific tasks; some objects may be moving in a similar direction to the robot, whereas others may be moving in the opposite direction. The key requirement for any mobile robot is to avoid colliding with any object which may prevent it from reaching its goal or, as a consequence, bring harm to any individual within its workspace. This challenge is further complicated by unobserved objects suddenly appearing in the robot's path, particularly when the robot crosses a corridor or an open doorway. Therefore, the mobile robot must be able to anticipate such scenarios and manoeuvre quickly to avoid collisions. In this project, a hybrid control architecture has been designed for navigation within dynamic environments. The control system includes three levels, namely deliberative, intermediate and reactive, which work together to achieve short, fast and safe navigation. The deliberative level creates a short and safe path from the current position of the mobile robot to its goal using the wavefront algorithm, estimates the current location of the mobile robot, and extracts the regions from which unobserved objects may appear. The intermediate level links the deliberative level and the reactive level, which includes several behaviours for executing the global path in such a way as to avoid any collision. In avoiding dynamic obstacles, the controller has to identify and extract obstacles from the sensor data, estimate their speeds, and then regulate its own speed and direction to minimise the collision risk and maximise progress towards the goal.
The velocity obstacle (VO) approach is considered an easy and simple method for avoiding dynamic obstacles, whilst the collision cone principle is used to detect a collision situation between two circular-shaped objects. However, the VO approach faces two challenges when applied in indoor environments. The first is the extraction of collision cones of non-circular objects from sensor data, where circle-fitting methods generally produce large and inaccurate collision cones, especially for line-shaped obstacles such as walls. The second is that the mobile robot sometimes cannot move towards its goal because all velocities towards the goal lie within collision cones. In this project, a method has been demonstrated to extract the collision cones of circular and non-circular objects using a laser sensor, where the obstacle size and the collision time are used to weight the robot velocities. In addition, the principle of the virtual obstacle was proposed to minimise the collision risk with unobserved moving obstacles. Simulation and experiments using the proposed control system on a Pioneer mobile robot showed that the mobile robot can successfully avoid static and dynamic obstacles. Furthermore, the mobile robot was able to reach its target within an indoor environment without causing any collision or missing the target.
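The collision cone test between two circular-shaped objects that the abstract builds on can be sketched as follows. This is a minimal illustration of the standard velocity obstacle geometry, not the thesis's laser-based extraction method for non-circular obstacles; all names and parameters are illustrative.

```python
import math

def in_collision_cone(p_rob, v_rob, p_obs, v_obs, r_rob, r_obs):
    """Collision cone test between two circular objects.

    Returns True if the relative velocity (v_rob - v_obs) points inside
    the cone of headings that lead to collision, i.e. the ray from the
    robot along the relative velocity passes within r_rob + r_obs of the
    obstacle centre.
    """
    dx, dy = p_obs[0] - p_rob[0], p_obs[1] - p_rob[1]
    dist = math.hypot(dx, dy)
    r_sum = r_rob + r_obs
    if dist <= r_sum:            # already overlapping
        return True
    rvx, rvy = v_rob[0] - v_obs[0], v_rob[1] - v_obs[1]
    speed = math.hypot(rvx, rvy)
    if speed == 0.0:             # no relative motion, no future collision
        return False
    # Angle between the line of sight and the relative velocity,
    # compared against the cone half-angle asin(r_sum / dist).
    cos_a = (dx * rvx + dy * rvy) / (dist * speed)
    angle = math.acos(max(-1.0, min(1.0, cos_a)))
    return angle <= math.asin(r_sum / dist)
```

A planner in the VO framework would evaluate this predicate for each candidate velocity and keep only those outside every obstacle's cone, which is also where the abstract's second challenge (all goal-directed velocities lying inside cones) arises.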

    Audification of Ultrasound for Human Echolocation

    Individuals with functional blindness must often use assistive aids to complete tasks of daily living. One of these tasks, locomotion, poses considerable risk. The long white cane is often used for haptic exploration, but it cannot detect obstacles that are not ground-based. Although devices have been developed to provide information above waist height, these do not provide auditory interfaces that are easy to learn. Development of such devices should adapt to the user, not require adaptation by the user. Can obstacle avoidance be achieved through direct perception? This research presents an auditory interface designed with the user as the primary focus. An analysis of the tasks required resulted in an interface that audifies ultrasound. Audification provides intuitive information that enables a perceptive response to environmental obstacles. A device was developed that provides Doppler-shift signals made audible by intentional aliasing. This system provides acoustic flow that is evident upon initiation of travel and has been shown to be effective in perceiving apertures and avoiding environmental obstacles. The orientation of the receivers on the device was also examined, with outward-facing receivers yielding better distance perception and centreline accuracy than forward-facing ones. The design of this novel user interface for visually impaired individuals also provides a tool that can be used to evaluate direct perception and acoustic flow in a manner that has not been studied before.
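The physical relation underlying the audification can be sketched with a short calculation. This only shows the two-way Doppler shift that makes echoes from an approaching surface audible once the carrier is removed; the 40 kHz carrier and the speed of sound are illustrative assumptions, and the device's actual intentional-aliasing front end is not modelled here.

```python
def doppler_shift(v_rel, f_carrier=40_000.0, c=343.0):
    """Two-way Doppler shift (Hz) of an ultrasonic echo from an
    approaching surface: f_d = 2 * v_rel * f_carrier / c.

    v_rel is the closing speed in m/s. With a 40 kHz carrier, typical
    walking speeds place the shift squarely in the audible band.
    """
    return 2.0 * v_rel * f_carrier / c

# Walking at 1.4 m/s towards a wall yields a tone of roughly 327 Hz.
shift = doppler_shift(1.4)
```

Because the shift scales with closing speed, a traveller hears rising pitch as they approach an obstacle, which is the acoustic flow the abstract describes.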

    Autonomous navigation and mapping of mobile robots based on 2D/3D cameras combination

    Interest in autonomous systems is growing steadily with the demand for technologies that support everyday living. Autonomous systems are used in houses, offices, museums and factories, where they perform tasks such as cleaning, household assistance, transportation, security, education and shop assistance, because they can keep processing time under control and deliver precise, reliable output. One research field within autonomous systems is mobile robot navigation and map generation: the mobile robot should operate autonomously while generating the map it navigates by. The main issue is that the mobile robot has to explore an unknown environment and generate a three-dimensional map of it without any further reference information. The robot must estimate its position and pose, which requires finding distinguishable objects; the choice of sensors and registration algorithms is therefore significant. Sensors that can provide both depth and image data are still deficient.
A new 3D sensor, the Photonic Mixer Device (PMD), captures the surrounding scene in real time at a high frame rate, delivering depth and grey-scale data. However, higher-quality three-dimensional exploration requires the surface detail and texture that only a high-resolution CCD camera can provide. This work therefore presents mobile robot exploration using a combination of CCD and PMD cameras to create a three-dimensional map of the environment. In addition, a high-performance algorithm for 3D mapping and pose estimation in real time, based on the Simultaneous Localization and Mapping (SLAM) technique, is proposed. The autonomous mobile robot should also handle tasks such as recognising objects in its environment in order to accomplish various practical missions. Visual input from the CCD camera not only delivers high-resolution texture data for the depth volume but is also used for object recognition. The Iterative Closest Point (ICP) algorithm uses two point clouds to estimate the translation and rotation between two scans. Finally, the evaluation of the correspondences and the reconstruction of the map to resemble the real environment are included in this thesis.
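The ICP transform step mentioned above can be sketched in closed form. This shows only the rigid alignment given known point correspondences, in 2D for brevity; a full ICP loop would alternate nearest-neighbour matching with this step, and nothing here reflects the thesis's specific CCD/PMD pipeline.

```python
import math

def align_2d(src, dst):
    """Closed-form least-squares 2D rigid transform (theta, tx, ty)
    mapping each src[i] onto dst[i]: one ICP iteration with known
    correspondences."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Accumulate the 2x2 cross-covariance of the demeaned point sets.
    sxx = sxy = syx = syy = 0.0
    for (ax, ay), (bx, by) in zip(src, dst):
        ax, ay = ax - csx, ay - csy
        bx, by = bx - cdx, by - cdy
        sxx += ax * bx; sxy += ax * by
        syx += ay * bx; syy += ay * by
    theta = math.atan2(sxy - syx, sxx + syy)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

For example, points rotated by 90 degrees and shifted by (2, 3) recover exactly theta = pi/2, tx = 2, ty = 3; real scans would be noisy and the loop would iterate to convergence.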

    Collaborative autonomy in heterogeneous multi-robot systems

    As autonomous mobile robots become increasingly connected and widely deployed in different domains, managing multiple robots and their interaction is key to the future of ubiquitous autonomous systems. Indeed, robots are no longer individual entities: many robots today are deployed as part of larger fleets or teams. The benefits of multi-robot collaboration, especially in heterogeneous groups, are manifold. Significantly higher degrees of situational awareness and environmental understanding can be achieved when robots with different operational capabilities are deployed together. Examples include the Perseverance rover and the Ingenuity helicopter that NASA has deployed on Mars, and the highly heterogeneous robot teams that explored caves and other complex environments during the last DARPA Sub-T competition. This thesis delves into the broad topic of collaborative autonomy in multi-robot systems, encompassing some of the key elements required for robust collaboration: solving collaborative decision-making problems; securing their operation, management and interaction; providing means for autonomous coordination in space and accurate global or relative state estimation; and achieving collaborative situational awareness through distributed perception and cooperative planning. The thesis covers novel formation control algorithms and new ways to achieve accurate absolute or relative localization within multi-robot systems. It also explores the potential of distributed ledger technologies as an underlying framework for collaborative decision-making in distributed robotic systems. Throughout the thesis, I introduce novel approaches to utilizing cryptographic elements and blockchain technology for securing the operation of autonomous robots, showing that sensor data and mission instructions can be validated in an end-to-end manner.
I then shift the focus to localization and coordination, studying ultra-wideband (UWB) radios and their potential. I show how UWB-based ranging and localization can enable aerial robots to operate in GNSS-denied environments, with a study of the constraints and limitations. I also study the potential of UWB-based relative localization between aerial and ground robots for more accurate positioning in areas where GNSS signals degrade. In terms of coordination, I introduce two new algorithms for formation control that require zero to minimal communication, provided a sufficient degree of awareness of neighboring robots is available. These algorithms are validated in simulation and real-world experiments. The thesis concludes with the integration of a new approach to cooperative path planning with UWB-based relative localization for dense scene reconstruction using lidar and vision sensors on ground and aerial robots.
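The UWB ranging-based localization discussed above rests on turning anchor-to-tag range measurements into a position fix. A minimal sketch, assuming three fixed anchors at known 2D positions and noise-free ranges (real UWB systems use more anchors, 3D geometry and least-squares or filtering):

```python
def trilaterate_2d(anchors, ranges):
    """2D position from three anchor ranges. Subtracting the first
    range equation from the other two linearises the problem into a
    2x2 system, solved here by Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = ranges
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21   # zero iff the anchors are collinear
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)
```

The collinearity caveat in the comment is the geometric dilution problem that anchor placement studies in GNSS-denied deployments have to address.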

    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Autonomous navigation and multi-sensorial real-time localization for a mobile robot

    PhD in Mechanical Engineering. This thesis evaluates the requisites for the specification of mobile robot 'Missions' for navigation within environments that are typically used by human beings. The principal idea behind the proposal of this thesis is to allow localization and navigation by providing a sequence of instructions, the execution of each instruction being conditional on the expected sensor data. This approach to navigation is expected to lead to new applications, including the autonomous navigation of environments of very large scale, and to more intuitive interaction between mobile robots and humans. However, the concept of the navigation Mission opens up new problems, namely in how the sequence of instructions and the expected observations are to be represented. To solve this problem, binary features were used to integrate observations from multiple sensors, the dimensionality of which was reduced by modelling the binary data as a finite mixture model comprised of Bernoulli distributions. Another original contribution was the modification of the Markov chains used in Hidden Markov Models to exploit the sequential context in which the expected observations are specified in the navigation Mission. The localization method that was developed enabled the direct creation of a topological representation of an environment without recourse to an intermediate geometric map. Other contributions include developments in the characterisation of images through the application of local features, and of laser range scans through the creation of original features based on the scan contour and free-area properties.
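The Bernoulli mixture modelling of binary features mentioned above can be sketched by the inference step it enables: computing which mixture component most plausibly generated an observed binary vector. This is a generic textbook computation, not the thesis's trained model; the weights and per-component probabilities below are made-up values.

```python
import math

def responsibilities(x, weights, mus):
    """Posterior component probabilities for a binary vector x under a
    Bernoulli mixture: weights[k] is the mixing proportion and
    mus[k][d] the success probability of bit d in component k."""
    logs = []
    for w, mu in zip(weights, mus):
        ll = math.log(w)
        for xi, m in zip(x, mu):
            ll += math.log(m) if xi else math.log(1.0 - m)
        logs.append(ll)
    mx = max(logs)                      # log-sum-exp for stability
    exps = [math.exp(v - mx) for v in logs]
    z = sum(exps)
    return [e / z for e in exps]

# A vector matching the first component's profile is assigned to it
# with near certainty.
r = responsibilities([1, 1, 0], [0.5, 0.5],
                     [[0.9, 0.9, 0.1], [0.1, 0.1, 0.9]])
```

In a localization setting, the low-dimensional component responsibilities can stand in for the raw binary observation vector, which is how a mixture model reduces dimensionality.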

    Resilient Perception for Outdoor Unmanned Ground Vehicles

    This thesis promotes the development of resilience for perception systems, with a focus on Unmanned Ground Vehicles (UGVs) in adverse environmental conditions. Perception is the interpretation of sensor data to produce a representation of the environment that is necessary for subsequent decision making. Long-term autonomy requires perception systems that function correctly in unusual but realistic conditions that will eventually occur during extended missions. State-of-the-art UGV systems can fail when the sensor data are beyond the operational capacity of the perception models. The key to a resilient perception system lies in the use of multiple sensor modalities and the pre-selection of appropriate sensor data to minimise the chance of failure. This thesis proposes a framework based on diagnostic principles to evaluate and pre-select sensor data prior to interpretation by the perception system. Image-based quality metrics are explored and evaluated experimentally using infrared (IR) and visual cameras on board a UGV in the presence of smoke and airborne dust. A novel quality metric, Spatial Entropy (SE), is introduced and evaluated. The proposed framework is applied to a state-of-the-art Visual-SLAM algorithm combining visual and IR imaging as a real-world example. An extensive experimental evaluation demonstrates that the framework allows for camera-based localisation that is resilient to a range of low-visibility conditions, compared with methods that use a single sensor or combine sensor data without selection. The proposed framework enables resilient localisation in adverse conditions using image data, and also has significant potential to benefit many other perception applications. Employing multiple sensing modalities along with pre-selection of appropriate data is a powerful way to create resilient perception systems by anticipating and mitigating errors. The development of such resilient perception systems is a requirement for next-generation outdoor UGVs.
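The pre-selection idea can be illustrated with an image-quality gate. The thesis's Spatial Entropy metric is not defined in this abstract, so the sketch below uses plain grayscale histogram (Shannon) entropy as an illustrative stand-in, and the pass/fail threshold is an arbitrary assumption.

```python
import math

def histogram_entropy(pixels, levels=256):
    """Shannon entropy (bits) of a grayscale intensity histogram.

    A near-zero value flags washed-out frames, e.g. a camera blinded
    by smoke or dust, that a diagnostic layer could reject before
    they reach a Visual-SLAM front end.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    n = len(pixels)
    return -sum((c / n) * math.log2(c / n) for c in hist if c)

def usable(pixels, threshold=3.0):
    """Hypothetical gate: accept the frame only if it carries enough
    intensity variation. The threshold is an illustrative assumption,
    not a value from the thesis."""
    return histogram_entropy(pixels) >= threshold
```

A constant frame scores zero bits and is rejected, while a frame spanning all 256 intensity levels scores the maximum 8 bits; in a multi-modal setup, such a score would help decide whether to trust the visual or the IR stream at each instant.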