197 research outputs found

    Visual Guided Approach-to-Grasp for Humanoid Robots

    Vision-based control for robots has been an active area of research for more than 30 years, and significant progress in both theory and application has been reported (Hutchinson et al., 1996; Kragic & Christensen, 2002; Chaumette & Hutchinson, 2006). Vision is an important non-contact measurement method for robots, especially in the field of humanoid robots.
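
    The cited surveys build on the classical image-based visual servoing law, in which the camera velocity is computed from the image-feature error through the pseudo-inverse of the interaction matrix. The sketch below is a minimal illustration of that classical law (not this paper's specific approach-to-grasp controller), using the standard point-feature interaction matrix; the gain and feature layout are illustrative assumptions.

```python
# Minimal image-based visual servoing (IBVS) sketch in the spirit of
# Chaumette & Hutchinson (2006). Gains and feature layout are illustrative.
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    """Camera velocity screw v = -lambda * L^+ * (s - s_star)."""
    error = s - s_star                          # feature error in image space
    return -lam * np.linalg.pinv(L) @ error     # 6-DoF velocity (vx, vy, vz, wx, wy, wz)

def point_interaction_matrix(x, y, Z):
    """Standard 2x6 interaction matrix of one normalized image point at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

# Example: stack the matrices of several point features and servo toward the goal.
points = [(0.1, 0.1, 1.0), (-0.1, 0.1, 1.2), (0.1, -0.1, 1.1), (-0.1, -0.1, 1.3)]
L = np.vstack([point_interaction_matrix(x, y, Z) for x, y, Z in points])
s = np.array([c for x, y, _ in points for c in (x, y)])   # current features
s_star = np.zeros_like(s)                                  # desired features
v = ibvs_velocity(s, s_star, L)
```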

    A framework for compliant physical interaction : the grasp meets the task

    Although the grasp-task interplay in our daily life is unquestionable, very little research has addressed this problem in robotics. In order to fill the gap between the grasp and the task, we adopt the most successful approaches to grasp and task specification, and extend them with additional elements that allow a grasp-task link to be defined. We propose a global sensor-based framework for the specification and robust control of physical interaction tasks, where the grasp and the task are jointly considered on the basis of the task frame formalism and the knowledge-based approach to grasping. A physical interaction task planner is also presented, based on the new concept of task-oriented hand pre-shapes. The planner focuses on the manipulation of articulated parts in home environments, and is able to automatically specify all the elements of a physical interaction task required by the proposed framework. Finally, several applications are described, showing the versatility of the proposed approach and its suitability for the fast implementation of robust physical interaction tasks on very different robotic systems.
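
    As a rough illustration of the task frame formalism that this framework builds on, the sketch below encodes a task as per-axis control modes and setpoints expressed in the task frame. The data structure and the door-handle example are assumptions made for illustration, not the paper's actual specification language.

```python
# Hedged sketch of a task-frame specification: each task-frame axis is either
# velocity-controlled or force-controlled. Field names are illustrative only.
from dataclasses import dataclass
from typing import Dict, Literal

Mode = Literal["velocity", "force"]

@dataclass
class TaskFrameSpec:
    """Per-axis control mode and setpoint, expressed in the task frame."""
    modes: Dict[str, Mode]        # e.g. {"x": "velocity", "z": "force", ...}
    setpoints: Dict[str, float]   # m/s or rad/s for velocity axes; N or Nm for force axes

# Example: pulling an articulated part -- advance along x at 0.05 m/s while
# regulating a 5 N contact force along z and keeping the rotational axes compliant.
pull_handle = TaskFrameSpec(
    modes={"x": "velocity", "y": "velocity", "z": "force",
           "rx": "force", "ry": "force", "rz": "force"},
    setpoints={"x": 0.05, "y": 0.0, "z": 5.0, "rx": 0.0, "ry": 0.0, "rz": 0.0},
)
```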

    What Can I Do Around Here? Deep Functional Scene Understanding for Cognitive Robots

    For robots that have the capability to interact with the physical environment through their end effectors, understanding the surrounding scene is not merely a task of image classification or object recognition. To perform actual tasks, it is critical for the robot to have a functional understanding of the visual scene. Here, we address the problem of localizing and recognizing functional areas in an arbitrary indoor scene, formulated as a two-stage deep-learning-based detection pipeline. A new scene-functionality test bed, compiled from two publicly available indoor scene datasets, is used for evaluation. Our method is evaluated quantitatively on the new dataset, demonstrating the ability to perform efficient recognition of functional areas in arbitrary indoor scenes. We also demonstrate that our detection model generalizes to novel indoor scenes by cross-validating it with images from the two different datasets.
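
    A hedged sketch of what a two-stage detection pipeline of this kind can look like: a first stage proposes candidate regions and a second stage classifies each region's functionality. The callables, labels and threshold below are placeholders, not the paper's actual networks or functional categories.

```python
# Illustrative two-stage functional-area detection: propose regions, then
# classify each crop. Both stages are passed in as callables (placeholders).
from typing import Callable, List, Tuple
import numpy as np

Box = Tuple[int, int, int, int]  # (x, y, w, h) in image coordinates

def detect_functional_areas(
    image: np.ndarray,
    propose_regions: Callable[[np.ndarray], List[Box]],
    classify_region: Callable[[np.ndarray], Tuple[str, float]],
    min_score: float = 0.5,
) -> List[Tuple[Box, str, float]]:
    """Stage 1 proposes candidate boxes; stage 2 labels each crop's functionality."""
    detections = []
    for (x, y, w, h) in propose_regions(image):
        label, score = classify_region(image[y:y + h, x:x + w])
        if score >= min_score:           # keep only confident functional areas
            detections.append(((x, y, w, h), label, score))
    return detections
```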

    Real-time 2D–3D door detection and state classification on a low-power device

    In this paper, we propose three methods for door state classification with the goal of improving robot navigation in indoor spaces. These methods were also developed to be usable in other areas and applications, since they are not limited to door detection as other related works are. Our methods work offline, on low-power computers such as the Jetson Nano, in real time, with the ability to differentiate between open, closed and semi-open doors. We use the 3D object classification network PointNet; real-time semantic segmentation algorithms such as FastFCN, FC-HarDNet, SegNet and BiSeNet; the object detection algorithm DetectNet; and the 2D object classification networks AlexNet and GoogLeNet. We built a 3D and RGB door dataset with images from several indoor environments using a RealSense D435 camera. This dataset is freely available online. All methods are analysed taking into account their accuracy and speed on a low-power computer. We conclude that it is possible to have a door classification algorithm running in real time on a low-power device.
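
    As a hedged illustration of the three-way state decision only (the paper itself relies on the learned models listed above, such as PointNet and the segmentation and detection networks), the sketch below labels a door from the angle between the fitted door plane and the door-frame plane; the thresholds are illustrative assumptions.

```python
# Geometric illustration of the open / semi-open / closed distinction.
# This is NOT the paper's method; it only shows the labels being separated.
import numpy as np

def door_state(frame_normal: np.ndarray, door_normal: np.ndarray,
               closed_max_deg: float = 10.0, open_min_deg: float = 70.0) -> str:
    """Classify door state from the angle between the frame and door plane normals."""
    cos_angle = abs(np.dot(frame_normal, door_normal)) / (
        np.linalg.norm(frame_normal) * np.linalg.norm(door_normal))
    angle = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    if angle < closed_max_deg:
        return "closed"       # door plane roughly parallel to the frame
    if angle > open_min_deg:
        return "open"         # door swung well away from the frame
    return "semi-open"

print(door_state(np.array([0.0, 1.0, 0.0]), np.array([0.7, 0.7, 0.0])))  # semi-open
```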

    Humanoid robot control of complex postural tasks based on learning from demonstration

    This thesis addresses the problem of planning and controlling complex tasks in a humanoid robot from a postural point of view. It is motivated by the growth of robotics in our current society, where simple robots are already being integrated, and its objective is to advance the development of complex behaviors in humanoid robots so that, in the future, they can share our environment. The work presents contributions in the areas of humanoid robot postural control, behavior planning, non-linear control, learning from demonstration and reinforcement learning. First, as an introduction to the thesis, a set of methods and mathematical formulations is presented, covering concepts such as humanoid robot modelling, generation of locomotion trajectories and generation of whole-body trajectories. Next, the process of human learning is studied in order to develop a novel method for transferring a postural task from a human to a robot. It uses the demonstrated action goal as the comparison metric, encoded through the reward associated with the task execution. As an evolution of that study, the process is generalized to a set of sequential behaviors, which the robot executes based on human demonstrations. Afterwards, the execution of postural movements using a robust control approach is proposed; this method makes it possible to track the desired trajectory even in the presence of mismatches in the robot model. Finally, an architecture that encompasses all the postural planning and control methods is presented. It is complemented by an environment recognition module that identifies free space in order to perform path planning and generate safe movements for the robot. The experimental validation of this thesis was carried out on the humanoid robot HOAP-3, on which tasks such as walking, standing up from a chair, dancing and opening a door have been implemented using the techniques proposed in this work.
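
    As a hedged sketch of the reward-as-metric idea described above, the snippet below scores candidate robot executions by how closely their task reward matches the reward of the human demonstration; the trajectory encoding and reward function are illustrative assumptions, not the thesis' actual formulation.

```python
# Hedged sketch: compare demonstration and robot executions through a task
# reward function and keep the execution that best reproduces the action goal.
from typing import Callable, List, Sequence
import numpy as np

Trajectory = Sequence[np.ndarray]   # sequence of whole-body posture vectors

def best_execution(demo: Trajectory,
                   candidates: List[Trajectory],
                   reward: Callable[[Trajectory], float]) -> Trajectory:
    """Pick the candidate whose task reward is closest to the demonstration's."""
    demo_reward = reward(demo)
    return min(candidates, key=lambda traj: abs(reward(traj) - demo_reward))
```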