    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that must perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. The richness of visual data provides a complete description of the environment, capturing both geometric and semantic information (e.g., object pose, distances, shapes, colors, lights). This wealth of data supports both methods that exploit all of it (dense approaches) and methods that work on a reduced set obtained through feature extraction (sparse approaches). This manuscript presents dense and sparse vision-based methods for the control and sensing of robotic systems. First, a safe navigation scheme for mobile robots moving in unknown, obstacle-populated environments is presented. For this task, dense visual information is used to perceive the environment (i.e., to detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. Sparse visual data, in turn, are reduced to geometric primitives in order to implement a visual servoing control scheme that satisfies the desired navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are exploited to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are also relevant in other contexts. In surgical robotics, reliable data about unmeasurable quantities are both highly valuable and difficult to obtain. This manuscript presents a Kalman-based observer that estimates the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robotic platform to extract relevant geometric information and obtain projected measurements of the tool pose. It has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and to provide ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators, whose inputs are theoretically arbitrary; they offer no way to actively adapt the input trajectories to optimize specific requirements on estimation performance. To this end, the active estimation paradigm is introduced and several related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimate. The approach applies to any robotic platform and has been validated on a manipulator arm equipped with a monocular camera.
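
    A minimal sketch (an illustration, not the manuscript's implementation) of the kind of Kalman-based observer described above: an extended Kalman filter that tracks a 3D tool position under a constant-velocity model and corrects it with projected pixel measurements from a camera. The focal length, noise covariances, and sampling period are assumptions.

        import numpy as np

        f = 800.0   # assumed focal length in pixels
        dt = 0.05   # assumed sampling period (s)

        # State: [x, y, z, vx, vy, vz]; constant-velocity motion model.
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)
        Q = 1e-4 * np.eye(6)   # process noise covariance (assumed)
        R = 2.0 * np.eye(2)    # pixel measurement noise covariance (assumed)

        def h(x):
            """Pinhole projection of the 3D point onto the image plane."""
            X, Y, Z = x[:3]
            return np.array([f * X / Z, f * Y / Z])

        def H_jac(x):
            """Jacobian of the projection with respect to the state."""
            X, Y, Z = x[:3]
            H = np.zeros((2, 6))
            H[0, 0] = f / Z
            H[0, 2] = -f * X / Z**2
            H[1, 1] = f / Z
            H[1, 2] = -f * Y / Z**2
            return H

        def ekf_step(x, P, z):
            # Predict with the motion model.
            x = F @ x
            P = F @ P @ F.T + Q
            # Correct with the projected measurement.
            H = H_jac(x)
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
            x = x + K @ (z - h(x))
            P = (np.eye(6) - K @ H) @ P
            return x, P

        # Illustrative use: rough initial guess, one pixel measurement per frame.
        x0 = np.array([0.0, 0.0, 0.12, 0.0, 0.0, 0.0])  # ~12 cm ahead of camera
        P0 = 0.01 * np.eye(6)
        x1, P1 = ekf_step(x0, P0, np.array([10.0, -4.0]))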

    Artificial Intelligence and Systems Theory: Applied to Cooperative Robots

    This paper describes an approach to the design of a population of cooperative robots based on concepts borrowed from Systems Theory and Artificial Intelligence. The research has been developed under the SocRob project, carried out by the Intelligent Systems Laboratory at the Institute for Systems and Robotics - Instituto Superior Tecnico (ISR/IST) in Lisbon. The project acronym stands for both "Society of Robots" and "Soccer Robots", the case study in which we are testing our population of robots. Designing soccer robots is a very challenging problem: the robots must act not only to shoot the ball towards the goal, but also to detect and avoid static obstacles (walls, stopped robots) and dynamic obstacles (moving robots). Furthermore, they must cooperate to defeat an opposing team. Our past and current research in soccer robotics includes cooperative sensor fusion for world modeling, object recognition and tracking, robot navigation, multi-robot distributed task planning and coordination (including cooperative reinforcement learning in cooperative and adversarial environments), and behavior-based architectures for real-time task execution by cooperating robot teams.
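
    One concrete form of the cooperative sensor fusion mentioned above is covariance-weighted fusion of the teammates' independent estimates of a shared object (e.g., the ball). The sketch below is a generic information-filter-style fusion, not the SocRob code; the numbers in the usage example are illustrative.

        import numpy as np

        def fuse(estimates):
            """Fuse independent (mean, covariance) 2D position estimates."""
            info = np.zeros((2, 2))  # accumulated information matrix
            vec = np.zeros(2)        # accumulated information vector
            for mean, cov in estimates:
                inv = np.linalg.inv(cov)
                info += inv
                vec += inv @ mean
            fused_cov = np.linalg.inv(info)
            return fused_cov @ vec, fused_cov

        # Two robots see the ball with different confidence; the fused
        # estimate is pulled towards the more certain measurement.
        ball, cov = fuse([(np.array([2.0, 1.0]), 0.2 * np.eye(2)),
                          (np.array([2.3, 0.8]), 0.5 * np.eye(2))])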

    AutoNavi3AT software interface for autonomous navigation on urban roads using omnidirectional vision and a mobile robot

    The design of efficient autonomous navigation systems for mobile robots and autonomous vehicles is fundamental to performing the programmed tasks. Two kinds of sensors are commonly used in urban road following: LIDAR and cameras. LIDAR sensors are highly accurate but expensive, and extra work is needed for humans to understand point cloud scenes; visual content, by contrast, is understood more readily by human beings, which makes it a better basis for human-robot interfaces. In this work, a computer vision-based urban road following software tool for mobile robots and autonomous vehicles, called AutoNavi3AT, is presented. The urban road following scheme proposed in AutoNavi3AT estimates and tracks the vanishing point in panoramic images to control the mobile robot heading on the urban road, using Gabor filters, region growing, and particle filters. In addition, laser range data are employed for local obstacle avoidance. Quantitative results were obtained in two kinds of tests: tests on datasets acquired at the Universidad del Valle campus, and field tests with a Pioneer 3AT mobile robot. Average improvements of 68.26% and 61.46% in vanishing point estimation were achieved, which is useful for mobile robots and autonomous vehicles moving on urban roads.
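
    A minimal sketch of the heading-control idea described above: once the vanishing point has been estimated and tracked, the robot is steered so that the point stays on the image centerline. The camera parameters, gain, and speed below are illustrative assumptions, not AutoNavi3AT's actual values.

        import math

        IMG_WIDTH = 1024   # panoramic image width in pixels (assumed)
        FOCAL_PX = 400.0   # assumed focal length in pixels
        K_HEADING = 0.8    # proportional steering gain (assumed)

        def heading_command(vp_x, forward_speed=0.4):
            """Map the vanishing point x-coordinate to (v, omega) commands."""
            # Horizontal offset of the vanishing point from the image center.
            offset = vp_x - IMG_WIDTH / 2.0
            # Convert the pixel offset into a bearing error (radians).
            heading_error = math.atan2(offset, FOCAL_PX)
            # Turn towards the vanishing point at constant forward speed.
            return forward_speed, -K_HEADING * heading_error

        v, omega = heading_command(540.0)  # vanishing point right of center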

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation from the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
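
    As a minimal illustration of the path-planning activity listed above (not a method from the book), the sketch below runs A* search on a small 2D occupancy grid, where 0 marks free cells and 1 marks obstacles.

        import heapq

        def astar(grid, start, goal):
            """A* on a 4-connected occupancy grid; returns a list of cells."""
            rows, cols = len(grid), len(grid[0])
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
            open_set = [(h(start), 0, start, None)]
            came_from, cost = {}, {start: 0}
            while open_set:
                _, g, node, parent = heapq.heappop(open_set)
                if node in came_from:        # already expanded
                    continue
                came_from[node] = parent
                if node == goal:             # reconstruct the path
                    path = []
                    while node is not None:
                        path.append(node)
                        node = came_from[node]
                    return path[::-1]
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (node[0] + dr, node[1] + dc)
                    if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                            and grid[nxt[0]][nxt[1]] == 0
                            and g + 1 < cost.get(nxt, float("inf"))):
                        cost[nxt] = g + 1
                        heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, nxt, node))
            return None  # no path exists

        grid = [[0, 0, 0],
                [1, 1, 0],
                [0, 0, 0]]
        path = astar(grid, (0, 0), (2, 0))  # routes around the obstacle row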

    Vision-based grasping of unknown objects to improve disabled people autonomy.

    This paper presents our contribution to vision-based robotic assistance for people with disabilities. The rehabilitative robotic arms currently available on the market are directly controlled by adaptive devices, which places increasing strain on the user's disability. To reduce the need for user actions, we propose several vision-based solutions to automate the grasping of unknown objects. Neither appearance databases nor object models are used: all the needed information is computed online. This paper focuses on the positioning of the camera and the approach of the gripper. For each of these two steps, two alternative solutions are provided. All the methods have been tested and validated on robotic cells, and some have already been integrated into our mobile robot SAM.
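
    A minimal sketch of one model-free grasp choice consistent with the setting above (no object model, everything computed online); this is an illustration, not the paper's method. Given a binary segmentation mask of the unknown object, the gripper closes across the object's minor axis, found by PCA of the object pixels.

        import numpy as np

        def grasp_from_mask(mask):
            """mask: 2D boolean array, True on object pixels."""
            ys, xs = np.nonzero(mask)
            pts = np.stack([xs, ys], axis=1).astype(float)
            centroid = pts.mean(axis=0)
            # Principal axes of the pixel distribution (PCA on 2x2 covariance).
            cov = np.cov((pts - centroid).T)
            eigvals, eigvecs = np.linalg.eigh(cov)
            major_axis = eigvecs[:, np.argmax(eigvals)]
            # Grasp across the minor axis: approach perpendicular (mod pi)
            # to the object's elongation.
            grasp_angle = np.arctan2(major_axis[1], major_axis[0]) + np.pi / 2
            return centroid, grasp_angle

        mask = np.zeros((40, 60), dtype=bool)
        mask[15:25, 10:50] = True                 # an elongated object
        center, angle = grasp_from_mask(mask)     # grasp across its width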

    Modeling the environment with egocentric vision systems

    More and more autonomous systems, whether robots or assistive devices, are present in our daily lives. Such systems interact with their environment, and to do so they need a model of it. The information or level of detail the model requires varies with the tasks to be performed: from detailed 3D models for autonomous navigation systems to semantic models that include information important to the user, such as the type of area or which objects are present. These models are built from the readings of the sensors available on the system. Nowadays, thanks to their small size, low price, and the rich information they capture, cameras are included in every autonomous system. The goal of this thesis is to develop and study new methods for building models of the environment at different semantic levels and with different levels of accuracy. Two key points characterize the work developed in this thesis: (i) the use of cameras with an egocentric or first-person point of view, mounted either on a robot or on a system worn by the user (wearable); in such systems the cameras move rigidly with the mobile platform that carries them, and in recent years many wearable vision systems have appeared, used for a multitude of applications from leisure to personal assistance; and (ii) the use of omnidirectional vision systems, distinguished by their wide field of view, which capture much more information per image than conventional cameras but pose new difficulties due to distortions and more complex projection models. This thesis studies several types of environment models. Metric models aim at detailed representations of the environment in which the autonomous system can be precisely localized; this thesis focuses on adapting such models to omnidirectional vision, which captures more information in each image and improves localization results. Topological models structure the environment as nodes connected by edges; this representation is less precise than the metric one, but offers a higher level of abstraction and can model the environment more richly. This thesis focuses on building topological models enriched with information about the type of area of each node and connection (corridor, room, doors, stairs...); a sketch of such a labeled model follows below. Finally, this work also contributes new semantic models, oriented towards applications in which the system interacts with or assists a person; such models represent the environment through concepts close to those used by people. In particular, this thesis develops techniques to obtain and propagate semantic information about the environment in image sequences.
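
    A minimal sketch of the labeled topological model described above: places are nodes annotated with an area type, and edges carry the kind of connection between them. The class names and labels are illustrative assumptions.

        from dataclasses import dataclass, field

        @dataclass
        class Place:
            name: str
            area_type: str   # e.g. "corridor", "room", "stairs"
            neighbors: dict = field(default_factory=dict)  # name -> connection

        def connect(a, b, connection):
            """Link two places with a typed, bidirectional connection."""
            a.neighbors[b.name] = connection
            b.neighbors[a.name] = connection

        # Illustrative use: a room reached from a corridor through a door.
        corridor = Place("corridor-1", "corridor")
        room = Place("lab-2", "room")
        connect(corridor, room, "door")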