157 research outputs found

    Incorporating prior knowledge into deep neural network controllers of legged robots


    Modeling Human Motor Skills to Enhance Robots’ Physical Interaction

    The need for user safety and technology acceptability has increased considerably with the deployment of co-bots that physically interact with humans in industrial settings and in assistive applications. A well-studied approach to meeting these requirements is to ensure human-like robot motions and interactions. In this manuscript, we present a research approach that starts from the understanding of human movements and derives useful guidelines for the planning of robot arm movements and the learning of skills for the physical interaction of robots with the surrounding environment.

    Visual Perception For Robotic Spatial Understanding

    Humans understand the world through vision without much effort. We perceive the structure, objects, and people in the environment and pay little direct attention to most of it, until it becomes useful. Intelligent systems, especially mobile robots, have no such biologically engineered vision mechanism to take for granted. Instead, we must devise algorithmic methods for taking raw sensor data and converting it to something useful very quickly. Vision is such a necessary part of building a robot, or any intelligent system meant to interact with the world, that it is somewhat surprising we don't have off-the-shelf libraries for this capability. Why is this? The simple answer is that the problem is extremely difficult. There has been progress, but the current state of the art is impressive and depressing at the same time. We now have neural networks that can recognize many objects in 2D images, in some cases performing better than a human. Some algorithms can also provide bounding boxes or pixel-level masks to localize the object. We have visual odometry and mapping algorithms that can build reasonably detailed maps over long distances with the right hardware and conditions. On the other hand, we have robots with many sensors and no efficient way to compute their relative extrinsic poses for integrating the data in a single frame. The same networks that produce good object segmentations and labels on a controlled benchmark still miss obvious objects in the real world and have no mechanism for learning on the fly while the robot is exploring. Finally, while we can detect pose for very specific objects, we don't yet have a mechanism that detects pose in a way that generalizes well over categories or that can describe new objects efficiently. We contribute algorithms in four of the areas mentioned above. First, we describe a practical and effective system for calibrating many sensors on a robot across up to three different modalities. Second, we present our approach to visual odometry and mapping, which exploits the unique capabilities of RGB-D sensors to efficiently build detailed representations of an environment. Third, we describe a 3-D over-segmentation technique that uses the models and ego-motion output of the previous step to generate segmentations that remain temporally consistent under camera motion. Finally, we develop a synthesized dataset of chair objects with part labels and investigate the influence of parts on RGB-D-based object pose recognition using a novel network architecture we call PartNet.
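The payoff of the multi-sensor extrinsic calibration mentioned above can be illustrated in a few lines: once the rigid transform between a sensor and the robot base is known, measurements from that sensor can be expressed in one common frame. This is a minimal sketch with made-up rotation and translation values, not the calibration system itself.

```python
import numpy as np

# Fuse a sensor measurement into the robot base frame using a known
# extrinsic transform. All numeric values here are illustrative.

def make_transform(yaw, t):
    """Homogeneous 4x4 transform: rotation about z by `yaw`, translation `t`."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = t
    return T

# Assumed camera pose in the base frame: 90 degrees about z, offset (0.2, 0, 0.5).
T_base_cam = make_transform(np.pi / 2, [0.2, 0.0, 0.5])
p_cam = np.array([1.0, 0.0, 0.0, 1.0])   # point seen 1 m ahead of the camera

p_base = T_base_cam @ p_cam
print(p_base[:3])   # the same point, expressed in the robot base frame
```

Chaining such transforms is how data from many sensors ends up in a single frame; the hard part the abstract refers to is estimating them efficiently in the first place.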

    A Riemannian Take on Human Motion Analysis and Retargeting

    Dynamic motions of humans and robots are widely driven by posture-dependent nonlinear interactions between their degrees of freedom. However, these dynamical effects remain mostly overlooked when studying the mechanisms of human movement generation. Inspired by recent works, we hypothesize that human motions are planned as sequences of geodesic synergies, and thus correspond to coordinated joint movements achieved with piecewise minimum energy. The underlying computational model is built on Riemannian geometry to account for the inertial characteristics of the body. Through the analysis of various human arm motions, we find that our model segments motions into geodesic synergies, and successfully predicts observed arm postures, hand trajectories, as well as their respective velocity profiles. Moreover, we show that our analysis can further be exploited to transfer arm motions to robots by reproducing individual human synergies as geodesic paths in the robot configuration space.
    Comment: Accepted for publication in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 202
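The "minimum energy under the inertia metric" idea can be caricatured numerically: treat a joint-space path as waypoints, measure its discrete Riemannian energy with a posture-dependent mass matrix, and deform it to reduce that energy. This is a toy sketch with illustrative two-link arm parameters, not the paper's model.

```python
import numpy as np

# Discrete minimum-energy path deformation under an inertia metric M(q).
# Link masses, lengths, and inertias below are made-up assumptions.
m1, m2, l1, lc1, lc2, I1, I2 = 1.0, 1.0, 0.3, 0.15, 0.15, 0.01, 0.01

def inertia(q):
    """Posture-dependent mass matrix of a planar two-link arm."""
    c2 = np.cos(q[1])
    m11 = I1 + I2 + m1*lc1**2 + m2*(l1**2 + lc2**2 + 2*l1*lc2*c2)
    m12 = I2 + m2*(lc2**2 + l1*lc2*c2)
    return np.array([[m11, m12], [m12, I2 + m2*lc2**2]])

def path_energy(path):
    """Sum of dq^T M(q) dq over segments: a discrete Riemannian energy."""
    return sum((b - a) @ inertia(0.5*(a + b)) @ (b - a)
               for a, b in zip(path[:-1], path[1:]))

# Straight-line joint interpolation is NOT a geodesic here, because M
# depends on posture; gradient descent on the waypoints lowers the energy.
q0, q1 = np.array([0.0, 0.2]), np.array([1.2, 1.5])
path = np.linspace(q0, q1, 12)
e0 = path_energy(path)

eps, step = 1e-5, 0.05
for _ in range(200):
    e_base = path_energy(path)
    grad = np.zeros_like(path)
    for k in range(1, len(path) - 1):        # endpoints stay fixed
        for j in range(2):
            p = path.copy(); p[k, j] += eps
            grad[k, j] = (path_energy(p) - e_base) / eps
    path = path - step * grad

print(path_energy(path) < e0)   # the deformed path has lower energy
```

A geodesic is exactly a path this descent can no longer improve, which is the sense in which the paper's synergies are "coordinated joint movements achieved with piecewise minimum energy".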

    On the Combination of Game-Theoretic Learning and Multi Model Adaptive Filters

    This paper casts coordination of a team of robots within the framework of game-theoretic learning algorithms. In particular, a novel variant of fictitious play is proposed that uses multi-model adaptive filters to estimate the other players' strategies. The proposed algorithm can serve as a coordination mechanism between players when they must take decisions under uncertainty. Each player chooses an action after taking into account the actions of the other players as well as the uncertainty, which can arise either from noisy observations or from various types of other players. In addition, in contrast to other game-theoretic and heuristic algorithms for distributed optimisation, it is not necessary to find the optimal parameters a priori: various parameter values can be used initially as inputs to different models, so the resulting decisions are aggregates over all the parameter values. Simulations are used to test the performance of the proposed methodology against other game-theoretic learning algorithms.
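For readers unfamiliar with fictitious play, here is a minimal sketch of the classic form for a two-player coordination game. The payoff matrix and the use of raw empirical frequencies are illustrative assumptions; the paper's variant instead estimates opponent strategies with multi-model adaptive filters.

```python
import numpy as np

# Classic fictitious play: each player keeps counts of the opponent's past
# actions, forms an empirical belief, and best-responds to that belief.
payoff = np.array([[1.0, 0.0],    # coordination game: both want to
                   [0.0, 1.0]])   # pick the same action

counts = [np.ones(2), np.ones(2)]  # pseudo-counts of each player's actions

for t in range(200):
    actions = []
    for i in range(2):
        belief = counts[1 - i] / counts[1 - i].sum()  # estimated opponent mix
        expected = payoff @ belief                    # expected payoff per action
        actions.append(int(np.argmax(expected)))      # best response
    for i in range(2):
        counts[i][actions[i]] += 1                    # record observed play

print(actions[0] == actions[1])   # the players have settled on one action
```

The paper's contribution sits in the belief-forming step: replacing the empirical frequency with a bank of adaptive filters, so that coordination remains possible under noisy observations and heterogeneous opponents.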

    American socio-politics in fictional context: transformers and the representation of the United States

    The fictional narratives developed to support the Transformers brand, with their underlying emphasis on the sale of action figures, have been dismissed as somewhat juvenile and uninteresting texts, and little to no serious academic analysis of any of the various iterations of the franchise has been undertaken. With this thesis, I endeavour to begin that analysis and thus broaden and problematize the currently limited understanding of Transformers fictions. Given the franchise's vast nature, I focus on the original animation (1984-1987) and the recent live-action movies (2007-2011), with their attempts to represent the contemporaneous socio-political environment in America at the time of their production. To undertake this study, I combine my background in political analysis with film and media studies to seek out and explore the political themes and commentaries present in the key areas of political philosophy, technological change, the depiction of politicians, gender and sexuality, and America's international role. Through this analysis of the franchise I construct an argument that Transformers is a complex narrative, replete with socio-political allegory, that offers a representation of the United States and a view of itself in the arenas of domestic and global politics.

    Human-aware space sharing and navigation for an interactive robot

    Methods for robot motion planning have advanced at an accelerated pace in recent years. The emphasis has mainly been on making robots more efficient, safer, and faster to react to unpredictable situations. As a result, we are witnessing more and more service robots introduced into our everyday lives, especially in public places such as museums, shopping malls, and airports. While a mobile service robot moves in a human environment, it is important to consider the effect of its behavior on the people it passes or interacts with. We do not see such robots as mere machines but as social agents, and we expect them to behave in a human-like way by following societal norms as rules. This has created new challenges and opened new research avenues for designing robot control algorithms that deliver human-acceptable, legible, and proactive robot behaviors. This thesis proposes an optimization-based cooperative method for trajectory planning and navigation with built-in social constraints that keep robot motions safe, human-aware, and predictable. The robot trajectory is dynamically and continuously adjusted to satisfy these social constraints. To do so, we treat the robot trajectory as an elastic band (a mathematical construct representing the robot path as a series of poses and the time differences between those poses) that can be deformed, both in space and time, by the optimization process to respect the given constraints. Moreover, we also predict plausible human trajectories in the same operating area by treating human paths as elastic bands as well. This scheme allows us to optimize the robot trajectories not only for the current moment but for the entire interaction that happens when humans and the robot cross each other's paths.
    We carried out a set of experiments with canonical human-robot interactive situations that occur in our everyday lives, such as crossing a hallway, passing through a door, and intersecting paths in wide open spaces. The proposed cooperative planning method compares favorably against other state-of-the-art human-aware navigation planning schemes. We have augmented the robot's navigation behavior with synchronized and responsive movements of its head, making the robot look where it is going and occasionally divert its gaze towards nearby people to acknowledge that it will avoid any possible collision with them, as planned by the planner. At any given moment the robot weighs multiple criteria according to the social context and decides where it should turn its gaze. Through an online user study we have shown that such a gazing mechanism effectively complements the navigation behavior and improves the legibility of the robot's actions. Finally, we have integrated our navigation scheme with a broader supervision system that can jointly generate normative robot behaviors, such as approaching a person and adapting the robot's speed according to the group of people the robot guides in airport or museum scenarios.
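The elastic-band construct at the heart of this thesis can be sketched in its simplest, spatial-only form: a chain of waypoints relaxed under an internal tension that keeps the band smooth and an external repulsion that pushes it away from a nearby person. The gains, geometry, and the person's position below are illustrative assumptions; the thesis additionally deforms bands in time and treats predicted human paths as bands too.

```python
import numpy as np

# Spatial elastic band: waypoints relax under smoothing tension plus a
# clearance-limited repulsion from an assumed human position.
person = np.array([1.0, 0.05])                    # assumed nearby human
band = np.linspace([0.0, 0.0], [2.0, 0.0], 15)    # initial straight path

k_ext, clearance = 0.02, 0.4
for _ in range(300):
    new = band.copy()
    for i in range(1, len(band) - 1):             # endpoints stay fixed
        tension = 0.5 * (band[i-1] + band[i+1] - 2*band[i])
        away = band[i] - person
        d = np.linalg.norm(away)
        repulse = k_ext * (1/d - 1/clearance) * away / d if d < clearance else 0.0
        new[i] = band[i] + tension + repulse
    band = new

min_dist = min(np.linalg.norm(p - person) for p in band)
print(min_dist > 0.05)   # the band has bent away from the person
```

The thesis's optimization generalizes this picture: each waypoint also carries a timestamp, the "forces" become social constraints in a cost function, and both the robot's band and the predicted human bands are deformed jointly.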

    Humanoid robot control of complex postural tasks based on learning from demonstration

    International Mention in the doctoral degree.
    This thesis addresses the problem of planning and controlling complex tasks in a humanoid robot from a postural point of view. It is motivated by the growth of robotics in our current society, where simple robots are already being integrated, and its objective is to advance the development of complex behaviors in humanoid robots so that, in the future, they can share our environment. The work presents contributions in the areas of humanoid robot postural control, behavior planning, nonlinear control, learning from demonstration, and reinforcement learning. First, as an introduction to the thesis, a group of methods and mathematical formulations is presented, covering humanoid robot modelling, the generation of locomotion trajectories, and the generation of whole-body trajectories. Next, the process of human learning is studied in order to develop a novel method for transferring a postural task from a human to a robot. It uses the demonstrated action goal as the metric of comparison, encoded through the reward associated with the task execution. As an evolution of the previous study, this process is generalized to a set of sequential behaviors, which are again executed by the robot based on human demonstrations. Afterwards, the execution of postural movements using a robust control approach is proposed; this method tracks the desired trajectory even in the presence of mismatches in the robot model. Finally, an architecture that encompasses all the postural planning and control methods is presented, complemented by an environment-recognition module that identifies free space in order to perform path planning and generate safe movements for the robot. The experimental validation of this thesis was carried out on the humanoid robot HOAP-3, on which tasks such as walking, standing up from a chair, dancing, and opening a door have been implemented using the techniques proposed in this work.
    Official Doctoral Program in Electrical, Electronic and Automation Engineering. President: Manuel Ángel Armada Rodríguez. Secretary: Luis Santiago Garrido Bullón. Panel member: Sylvain Calino
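The "demonstrated action goal as the metric of comparison" idea can be caricatured in a few lines: rather than matching joint trajectories, candidate robot executions are ranked by how closely their task reward matches the demonstration's. The reward function, goal, and candidate executions below are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

# Compare executions by task outcome (reward), not by raw trajectories.
def task_reward(traj, goal):
    """Illustrative reward: negative distance of the final posture to the goal."""
    return -np.linalg.norm(traj[-1] - goal)

goal = np.array([1.0, 0.5])
demo = np.linspace([0.0, 0.0], goal, 20)          # human demonstration

rng = np.random.default_rng(1)                    # noisy candidate executions
candidates = [demo + rng.normal(0.0, s, demo.shape) for s in (0.4, 0.05, 0.8)]

r_demo = task_reward(demo, goal)
best = min(candidates, key=lambda c: abs(task_reward(c, goal) - r_demo))
# `best` is the execution whose outcome best matches the demonstrated goal,
# even if its joint path differs from the demonstration.
```

This outcome-level comparison is what lets the transfer tolerate the kinematic differences between the human and the robot.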