77 research outputs found

    Probabilistic Human-Robot Information Fusion

    Get PDF
    This thesis is concerned with combining the perceptual abilities of mobile robots and human operators to execute tasks cooperatively. It is generally agreed that a synergy of human and robotic skills offers an opportunity to enhance the capabilities of today’s robotic systems, while also increasing their robustness and reliability. Systems which incorporate both human and robotic information sources have the potential to build complex world models, essential for both automated and human decision making. In this work, humans and robots are regarded as equal team members who interact and communicate on a peer-to-peer basis. Human-robot communication is addressed using probabilistic representations common in robotics. While communication can in general be bidirectional, this work focuses primarily on human-to-robot information flow. More specifically, the approach advocated in this thesis is to let robots fuse their sensor observations with observations obtained from human operators. While robotic perception is well-suited for lower level world descriptions such as geometric properties, humans are able to contribute perceptual information on higher abstraction levels. Human input is translated into the machine representation via Human Sensor Models. A common mathematical framework for humans and robots reinforces the notion of true peer-to-peer interaction. Human-robot information fusion is demonstrated in two application domains: (1) scalable information gathering, and (2) cooperative decision making. Scalable information gathering is experimentally demonstrated on a system comprised of a ground vehicle, an unmanned air vehicle, and two human operators in a natural environment. Information from humans and robots was fused in a fully decentralised manner to build a shared environment representation on multiple abstraction levels. 
Results are presented in the form of information exchange patterns, qualitatively demonstrating the benefits of human-robot information fusion. The second application domain adds decision making to the human-robot task. Rational decisions are made based on the robots’ current beliefs which are generated by fusing human and robotic observations. Since humans are considered a valuable resource in this context, operators are only queried for input when the expected benefit of an observation exceeds the cost of obtaining it. The system can be seen as adjusting its autonomy at run-time based on the uncertainty in the robots’ beliefs. A navigation task is used to demonstrate the adjustable autonomy system experimentally. Results from two experiments are reported: a quantitative evaluation of human-robot team effectiveness, and a user study to compare the system to classical teleoperation. Results show the superiority of the system with respect to performance, operator workload, and usability
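The fusion-and-querying scheme this abstract describes can be sketched as a simple Bayesian update over a discrete belief, with the operator queried only when the expected information gain outweighs a fixed cost. All numbers, the three-class belief, and the cost threshold below are illustrative assumptions, not the thesis's actual human sensor models:

```python
import numpy as np

def fuse(belief, likelihood):
    """Bayes update: multiply the current belief by an observation
    likelihood and renormalise."""
    post = belief * likelihood
    return post / post.sum()

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief."""
    p = belief[belief > 0]
    return float(-np.sum(p * np.log2(p)))

# Prior belief over three hypothetical object classes at one map location.
belief = np.array([1/3, 1/3, 1/3])

# Robot observation: likelihood of the sensor data under each class.
belief = fuse(belief, np.array([0.5, 0.3, 0.2]))

# Query the human only if the expected entropy reduction exceeds the cost.
QUERY_COST_BITS = 0.2                # hypothetical cost of interrupting the operator
EXPECTED_POSTERIOR_ENTROPY = 0.8     # hypothetical value-of-information estimate
if entropy(belief) - EXPECTED_POSTERIOR_ENTROPY > QUERY_COST_BITS:
    # Human sensor model: "looks like class 0", with a fixed error rate.
    belief = fuse(belief, np.array([0.8, 0.1, 0.1]))
```

Because human and robot observations enter through the same `fuse` step, the two sources are treated as mathematical peers, which is the point the abstract makes about a common framework.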

    Probabilistic stable motion planning with stability uncertainty for articulated vehicles on challenging terrains

    Full text link
    © 2015, Springer Science+Business Media New York. A probabilistic stable motion planning strategy applicable to reconfigurable robots is presented in this paper. The methodology derives a novel statistical stability criterion from the cumulative distribution of a tip-over metric. The measure is dynamically updated with imprecise terrain information, localization and robot kinematics to plan safety-constrained paths which also afford the widest possible visibility of the surroundings by assuming the highest feasible vantage configurations. The proposed probabilistic stability metric enforces more conservative poses in areas with higher levels of uncertainty, while avoiding unnecessary caution in poses assumed at well-known terrain sections. Implementations with the well-known grid-based A* algorithm and with a sampling-based RRT planner are presented. The validity of the proposed approach is evaluated with a multi-tracked robot fitted with a manipulator arm and a range camera on two challenging elevation terrain data sets: one obtained while operating the robot in a mock-up urban search and rescue arena, and the other from a publicly available dataset of a quasi-outdoor rover testing facility
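As a rough illustration of how a cumulative-distribution stability criterion can gate a grid planner, the sketch below treats the tip-over margin along each candidate edge as a Gaussian estimate and prunes or penalises edges by their probability of stability. The Gaussian assumption, the `p_min` threshold and the cost shaping are hypothetical simplifications of the paper's metric:

```python
import math

def stability_probability(mean_margin, sigma):
    """P(stability margin > 0) when the tip-over margin is modelled as a
    Gaussian with the given mean and standard deviation: 1 - Phi(-mu/sigma)."""
    return 0.5 * (1.0 + math.erf(mean_margin / (sigma * math.sqrt(2.0))))

def edge_cost(base_cost, mean_margin, sigma, p_min=0.95):
    """A*-style edge cost: reject edges whose stability probability falls
    below p_min, and mildly penalise uncertain edges so the planner prefers
    conservative poses in poorly known terrain."""
    p = stability_probability(mean_margin, sigma)
    if p < p_min:
        return float("inf")   # unsafe: effectively prunes the edge
    return base_cost / p      # residual-uncertainty penalty
```

Plugging `edge_cost` into a standard A* expansion reproduces the qualitative behaviour described above: the same mean margin yields a higher, or infinite, cost where the terrain variance `sigma` is larger.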

    Design, testing and validation of model predictive control for an unmanned ground vehicle

    Full text link
    The rapid increase in designing, manufacturing, and using autonomous robots has attracted numerous researchers and industries in recent decades, motivated by a wide range of applications such as perimeter surveillance, search and rescue missions, agriculture, and construction. In this thesis, motion planning and control based on model predictive control (MPC) for unmanned ground vehicles (UGVs) is tackled, and different variants of MPC are designed, analysed, and implemented for such non-holonomic systems. One motivation is MPC's ability to handle constraints explicitly; another is that advances in computer processing allow these controllers to run in real time. The controller is responsible for accurate trajectory tracking and for handling specifications usually not considered or solved by the planner. The separation between planner and controller, however, is not uniquely defined and can be a hybrid process, as seen in part of this thesis. Firstly, a robust MPC is designed and implemented for a small-scale autonomous bulldozer in the presence of uncertainties; it uses an optimal control action and a feed-forward controller to suppress these uncertainties. More precisely, a linearised variant of MPC is deployed to solve the trajectory tracking problem of the vehicle. Afterwards, a nonlinear MPC is designed and implemented to solve the path-following problem of the UGV for masonry in a construction context, where longitudinal velocity and yaw rate are employed as control inputs to the platform. For both control techniques, several experiments are performed to validate the robustness and accuracy of the proposed scheme. These experiments are performed under realistic localisation accuracy, as provided by a typical localiser. Most conspicuously, a novel proximal planning and control strategy is implemented in the presence of skid-slip, with dynamic and static collision avoidance, for the posture control and tracking control problems. The ability to operate among moving objects is critical for UGVs to function well. The approach offers specific planning capabilities, able to deal at high frequency with context characteristics which the higher-level planner may not solve well. These context characteristics are related to dynamic objects and other terrain details detected by the platform's onboard perception capabilities. In the control context, proximal and interior-point optimisation methods are used for MPC. Particular attention is given to the processing time required by the MPC process to obtain the control actions, since each control action must be calculated and applied in real time. Because the length of the prediction horizon is critical in practical applications, it is examined in further detail. In another study, the accuracies of robust and nonlinear model predictive controllers are compared. Finally, a hybrid controller is proposed and implemented. This approach exploits the availability of a simplified cost-to-go (CTG) function provided by a higher-level planner; the hybrid approach fuses, in real time, the nominal CTG function (nominal terrain map) with the remaining critical constraints, which the planner usually ignores. The conducted research fills necessary gaps in the application areas of MPC and UGVs. Both theoretical and practical contributions are made in this thesis, and extensive simulations and experiments are performed to verify that MPC works within the processing capability of the onboard computer
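A minimal, unconstrained linearised MPC of the kind the abstract describes can be written as one batch least-squares problem over the prediction horizon. The double-integrator plant, weights and horizon below are hypothetical stand-ins, not the bulldozer or UGV models used in the thesis:

```python
import numpy as np

def linear_mpc(A, B, x0, x_ref, N, q=1.0, r=0.1):
    """One step of unconstrained linear MPC: stack the predictions
    x_k = A^k x0 + sum_j A^(k-1-j) B u_j over horizon N, minimise
    q*||x_k - x_ref||^2 + r*||u_k||^2 in closed form, and return only
    the first input (receding-horizon principle)."""
    n, m = B.shape
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    Gamma = np.zeros((N * n, N * m))
    for k in range(N):
        for j in range(k + 1):
            Gamma[k*n:(k+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, k - j) @ B
    Xref = np.tile(x_ref, N)
    H = q * Gamma.T @ Gamma + r * np.eye(N * m)   # quadratic term
    f = q * Gamma.T @ (Phi @ x0 - Xref)           # linear term
    U = np.linalg.solve(H, -f)                    # optimal input sequence
    return U[:m]

# Hypothetical double-integrator plant (position, velocity), 0.1 s steps.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
u0 = linear_mpc(A, B, x0=np.array([0.0, 0.0]), x_ref=np.array([1.0, 0.0]), N=20)
```

With input or state constraints added, the same `H` and `f` define the quadratic program that a constrained MPC variant would hand to a QP solver, e.g. an interior-point method, at every control step; the horizon length `N` then directly drives the per-step processing time the abstract worries about.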

    Autonomous/Semi-autonomous Navigation System of a Wheelchair by Active Ultrasonic Beacons

    Get PDF

    Characterisation of a nuclear cave environment utilising an autonomous swarm of heterogeneous robots

    Get PDF
    As nuclear facilities come to the end of their operational lifetime, safe decommissioning becomes a more prevalent issue. Many such facilities contain ‘nuclear caves’: areas that may have been entered infrequently, or even not at all, since the construction of the facility. As a result, the topography and contents of these nuclear caves may be unknown in a number of critical aspects, such as the location of dangerous substances or of significant physical blockages to movement around the cave. To aid safe decommissioning, autonomous robotic systems capable of characterising nuclear cave environments are desired. The research put forward in this thesis seeks to answer the question: is it possible to utilise a heterogeneous swarm of autonomous robots for the remote characterisation of a nuclear cave environment? This is achieved through examination of the three key components of a heterogeneous swarm: sensing, locomotion and control. It is shown that a heterogeneous swarm is not only capable of performing this task but preferable to a homogeneous one, owing to its increased sensory and locomotive capabilities and its more efficient exploration

    Real-time location systems based on wireless communication networks

    Get PDF
    Reliable radiolocation techniques are indispensable to the development of a large number of relevant new systems. Localization techniques based on wireless communication networks (WNs) are particularly well suited to confined and heavily urbanised spaces. This research project focuses on real-time location systems (RTLS) based on existing wireless communication technologies. Two new alternative radiolocation techniques are proposed to improve the positioning accuracy of mobile wireless nodes over conventional methods based on received signal strength (RSS). The first, a geometric method, proposes a new metric that compensates between the signal strengths received at pairs of fixed receiving stations. The advantage of this technique is that it reduces the effect of variations in the propagation environment and in transmit power on localization accuracy. The same metric is selected to form the signatures used to build the radio map of the localization environment during the offline phase of the second method, a scene-analysis technique. During the real-time localization phase, compressed sensing (CS) is applied to recover the positions of the mobile nodes from a reduced number of received-signal samples by comparing them against the pre-established radio map. The multilinear algebra computation proposed in this work allows this type of ternary metric, equivalent to the time difference of arrival (TDOA), to be used to compute target positions with the CS technique. Both methods are then validated through simulations and experiments in two- and three-dimensional environments.
The experiments were conducted in a multi-floor building (MFB), using the existing wireless infrastructure to jointly recover the position and the floor of the targets with the proposed techniques. An emblematic example of RTLS applications in urban areas is intelligent transportation systems (ITS) for improving road safety. This project is also concerned with the performance of a pedestrian-safety application at road intersections. Such a system relies on the reliable exchange, under severe timing constraints, of geographic positioning data between mobile nodes, so that they keep one another informed of their presence and position in order to prevent collision risks. This project carries out a comparative study between two architectures of an ITS system enabling communication between pedestrians and vehicles, directly and via an infrastructure unit, in accordance with the communication standards of vehicular ad hoc networks (VANETs)
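A toy version of the RSS-difference idea: pairwise differences of received powers cancel the unknown transmit power, and the same differences serve as offline fingerprints. The CS recovery and multilinear algebra of the thesis are replaced here by a plain nearest-signature search, and all values are invented:

```python
import numpy as np

def rss_difference_signature(rss):
    """Signature built from pairwise differences of the powers received at
    fixed stations; adding a constant offset (an unknown transmit power, in
    dB) to every entry leaves the signature unchanged."""
    rss = np.asarray(rss, dtype=float)
    i, j = np.triu_indices(len(rss), k=1)
    return rss[i] - rss[j]

def locate(radio_map, positions, measured_rss):
    """Offline phase: radio_map holds one surveyed RSS vector per position.
    Online phase: return the position whose signature is nearest to the
    measured one."""
    sig = rss_difference_signature(measured_rss)
    dists = [np.linalg.norm(rss_difference_signature(r) - sig) for r in radio_map]
    return positions[int(np.argmin(dists))]

# Two surveyed positions with hypothetical RSS vectors (dBm) at three receivers.
radio_map = [np.array([-40.0, -60.0, -70.0]),
             np.array([-70.0, -45.0, -55.0])]
positions = [(0.0, 0.0), (5.0, 3.0)]
estimate = locate(radio_map, positions, measured_rss=[-68.0, -44.0, -56.0])
```

The transmit-power invariance is the practical payoff: two tags with different emission powers at the same spot produce the same signature, which is the robustness property the abstract claims for the compensation metric.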

    ‘Online’, long-term and large-scale simultaneous mapping, localization and planning for a mobile robot

    Get PDF
    To navigate in unknown and unstructured places, a robot must be able to map the environment in order to localize itself within it. This problem is known as SLAM (Simultaneous Localization and Mapping). Once the map of the environment has been created, tasks requiring movement from one known place to another can be planned. The computational load of SLAM depends on the size of the map. A robot has limited onboard computing power with which to process the information ‘online’, that is, on board the robot with a data-processing time shorter than the data-acquisition time or the maximum allowed map-update time. Navigation while performing SLAM is therefore limited by the size of the environment to be mapped. To address this problem, the objective is to develop a SPLAM (Simultaneous Planning, Localization and Mapping) algorithm enabling navigation regardless of the size of the environment. To manage the computational load of this algorithm efficiently, the robot's memory is divided into a working memory and a long-term memory. When the ‘online’ processing constraint is reached, the locations seen least often and not useful for navigation are transferred from working memory to long-term memory. Locations transferred to long-term memory are no longer used for navigation. However, these transferred locations can be retrieved from long-term memory back into working memory when the robot approaches a neighbouring location still held in working memory. The robot can thus incrementally recall a previously forgotten part of the environment in order to localize itself there for trajectory following.
The algorithm, named RTAB-Map, was tested on the AZIMUT-3 robot in a first mapping experiment over five independent sessions, to evaluate the system's ability to merge several maps ‘online’. A second experiment with the same robot, over eleven sessions totalling 8 hours of motion, evaluated the robot's ability to navigate autonomously while performing SLAM and continuously planning trajectories over a long period under the ‘online’ processing constraint. Finally, RTAB-Map is compared with other SLAM systems on four popular datasets covering autonomous driving (KITTI), handheld RGB-D camera scanning (TUM RGB-D), drones (EuRoC) and indoor navigation with a PR2 robot (MIT Stata Center). The results show that RTAB-Map can be used over long periods of autonomous navigation while respecting the ‘online’ processing constraint, with map quality comparable to state-of-the-art visual and lidar SLAM approaches. The result is open-source software deployed in a multitude of applications, from low-cost indoor mobile robots to autonomous cars, drones, and 3D modelling of house interiors
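The working-memory / long-term-memory transfer and retrieval policy described above can be caricatured in a few lines. The visit-count weighting, the fixed budget and the neighbour-based retrieval below are illustrative only and do not reproduce RTAB-Map's actual implementation:

```python
class MemoryManager:
    """Toy working-memory / long-term-memory split: locations seen least
    often are transferred out when the working-memory budget (standing in
    for the 'online' processing-time constraint) is exceeded, and are
    retrieved when the robot revisits a neighbouring location."""

    def __init__(self, max_working_size):
        self.max_working_size = max_working_size
        self.working = {}    # location id -> (visit count, neighbour ids)
        self.long_term = {}

    def observe(self, loc_id, neighbours=()):
        count, nb = self.working.get(loc_id, (0, set()))
        self.working[loc_id] = (count + 1, set(nb) | set(neighbours))
        # Retrieval: bring forgotten neighbours of the current location back.
        for n in set(neighbours) & set(self.long_term):
            self.working[n] = self.long_term.pop(n)
        self._enforce_budget(current=loc_id)

    def _enforce_budget(self, current):
        # Transfer the least-often-seen locations (never the current one)
        # until the working memory fits the budget again.
        while len(self.working) > self.max_working_size:
            victim = min((k for k in self.working if k != current),
                         key=lambda k: self.working[k][0])
            self.long_term[victim] = self.working.pop(victim)
```

The key property mirrored here is that forgetting is reversible: a transferred location comes back into working memory as soon as the robot approaches one of its neighbours, which is what lets the robot re-localize in ‘forgotten’ parts of the map.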

    New approaches to the emerging social neuroscience of human-robot interaction

    Get PDF
    Prehistoric art, like the Venus of Willendorf sculpture, shows that we have always looked for ways to distil fundamental human characteristics and capture them in physically embodied representations of the self. Recently, this undertaking has gained new momentum through the introduction of robots that resemble humans in their shape and behaviour. These social robots are envisioned to take on important roles: alleviating loneliness, supporting vulnerable children, and serving as helpful companions for the elderly. To date, however, few commercially available social robots live up to these expectations. Given their importance for an ageing and increasingly socially isolated society, rigorous research at the intersection of psychology, social neuroscience and human-robot interaction is needed to determine to what extent mechanisms active during human-human interaction are co-opted when we encounter social robots. This thesis takes an anthropocentric approach to the question of how socially motivated we are to interact with humanoid robots. Across three empirical chapters and one theoretical chapter, I use self-report, behavioural and neural measures relevant to the study of interactions with robots to address this question. With the Social Motivation Theory of Autism as a point of departure, the first empirical chapter (Chapter 3) investigates the relevance of interpersonal synchrony for human-robot interaction. This chapter reports a null effect: participants did not find a robot that synchronised its movements with them on a drawing task more likeable, nor were they more motivated to ask it questions in a semi-structured interaction scenario. As this chapter relies heavily on self-report as its main outcome measure, Chapter 4 addresses this limitation by adapting an established behavioural paradigm for the study of human-robot interaction. 
This chapter shows that a failure to conceptually extend an effect from the field of social attentional capture calls for a different approach when adapting paradigms for HRI. Chapter 5 serves as a moment of reflection on the current state of the art at the intersection of neuroscience and human-robot interaction. Here, I argue that the future of HRI research will rely on interaction studies with mobile brain-imaging systems (such as functional near-infrared spectroscopy, fNIRS) that allow data collection during embodied encounters with social robots. Going forward, however, the field should move slowly and carefully outside of the lab and into real situations with robots: as the previous chapters establish, well-known effects have to be replicated before they are implemented for robots, and before they are taken out of the lab into real life. The final empirical chapter (Chapter 6) takes the first step of this proposed slow approach: in addition to establishing the detection rate of a mobile fNIRS system in comparison to fMRI, it contributes a novel way of digitising optode positions by means of photogrammetry. In the final chapter of this thesis, I highlight the main lessons learned from conducting studies with social robots. I propose an updated roadmap that takes into account the problems raised in this thesis, and I emphasise the importance of incorporating more open science practices going forward. Various tools that emerged from the open science movement will be invaluable for researchers working on this exciting, interdisciplinary endeavour

    Attention Network for 3D Object Detection in Point Clouds

    Get PDF
    Accurate detection of objects in 3D point clouds is a central problem for autonomous navigation. Most existing methods use handcrafted feature representations or multi-modal approaches prone to sensor failure. Approaches like PointNet, which operate directly on sparse point data, have shown good accuracy in the classification of single 3D objects. However, LiDAR sensors on autonomous vehicles generate large-scale point clouds, and real-time object detection in such a cluttered environment remains a challenge. In this thesis, we propose Attentional PointNet, a novel end-to-end trainable deep architecture for object detection in point clouds. We extend the theory of the visual attention mechanism to 3D point clouds and introduce a new recurrent 3D Spatial Transformer Network module. Rather than processing the whole point cloud, the network learns "where to look" (i.e. to find regions of interest), significantly reducing the number of points and hence the inference time. Evaluation on the KITTI car detection benchmark shows that Attentional PointNet is notably faster than, and achieves results comparable with, state-of-the-art LiDAR-based 3D detection methods.
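The "where to look" idea, running detection only on promising regions instead of the full cloud, can be mimicked with a crude density heuristic. The real module is a learned recurrent 3D spatial transformer; the grid size, `top_k` and synthetic cloud below are invented:

```python
import numpy as np

def regions_of_interest(points, cell=4.0, top_k=3):
    """Grid the ground plane and keep only the densest cells, so a detector
    only runs on a fraction of the cloud. A crude, fixed heuristic standing
    in for the learned attention module described above."""
    cells = np.floor(points[:, :2] / cell).astype(int)
    keys = cells[:, 0] * 100000 + cells[:, 1]     # scalar cell index
    _, inverse, counts = np.unique(keys, return_inverse=True, return_counts=True)
    best = np.argsort(counts)[::-1][:top_k]       # densest top_k cells
    return points[np.isin(inverse, best)]

# Synthetic scene: sparse background plus one dense cluster (a "car").
rng = np.random.default_rng(0)
cloud = np.vstack([rng.uniform(0, 40, (1000, 3)),
                   rng.normal([10.0, 10.0, 0.0], 0.5, (500, 3))])
roi = regions_of_interest(cloud)
```

Even this fixed heuristic discards most of the background points while keeping the dense cluster, which illustrates why attending to regions of interest cuts inference time for the downstream detector.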
