57 research outputs found

    Autonomous Systems, Robotics, and Computing Systems Capability Roadmap: NRC Dialogue

    Get PDF
    Contents include the following: Introduction. Process, Mission Drivers, Deliverables, and Interfaces. Autonomy. Crew-Centered and Remote Operations. Integrated Systems Health Management. Autonomous Vehicle Control. Autonomous Process Control. Robotics. Robotics for Solar System Exploration. Robotics for Lunar and Planetary Habitation. Robotics for In-Space Operations. Computing Systems. Conclusion

    A Goal-Oriented Autonomous Controller for Space Exploration

    Get PDF
    The Goal-Oriented Autonomous Controller (GOAC) is the envisaged result of a multi-institutional effort within the ongoing Autonomous Controller R&D activity funded by ESA ESTEC. The objective of this effort is to design, build, and test a viable on-board controller to demonstrate key concepts in fully autonomous operations for ESA missions. This three-layer architecture is an integrative effort bringing together four mature technologies: a functional layer, a verification and validation system, a planning engine, and a controller framework for planning and execution that uses the sense-plan-act paradigm for goal-oriented autonomy. As a result, GOAC will generate plans in situ, deterministically dispatch activities for execution, and recover from off-nominal conditions.
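The sense-plan-act loop with in-situ replanning described above can be sketched minimally. Everything here (the `Goal` class, the activity naming, the toy world dictionary) is illustrative, not the GOAC API:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    """A named goal with a predicate over the sensed state."""
    name: str
    satisfied: Callable[[dict], bool]

def sense(world: dict) -> dict:
    """Read the (simulated) environment state."""
    return dict(world)

def plan(state: dict, goal: Goal) -> list[str]:
    """Produce a deterministic sequence of activities for the goal."""
    return [] if goal.satisfied(state) else [f"achieve:{goal.name}"]

def act(world: dict, activity: str) -> None:
    """Dispatch one activity; here it simply marks the goal achieved."""
    world[activity.split(":", 1)[1]] = True

def control_loop(world: dict, goal: Goal, max_cycles: int = 10) -> bool:
    """Run sense-plan-act until the goal holds or cycles run out.
    Replanning from the freshly sensed state each cycle is what gives
    recovery from off-nominal conditions."""
    for _ in range(max_cycles):
        state = sense(world)
        if goal.satisfied(state):
            return True
        for activity in plan(state, goal):
            act(world, activity)
    return goal.satisfied(sense(world))

world = {"sample_collected": False}
goal = Goal("sample_collected", lambda s: s.get("sample_collected", False))
print(control_loop(world, goal))  # True
```

The key property of the sketch is that the plan is never trusted blindly: each cycle re-senses the world, so an activity that failed is simply planned again.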

    Human Health and Support Systems Capability Roadmap Progress Review

    Get PDF
    The Human Health and Support Systems Capability Roadmap focuses on research and technology development and demonstration required to ensure the health, habitation, safety, and effectiveness of crews in and beyond low Earth orbit. It contains three distinct sub-capabilities: Human Health and Performance. Life Support and Habitats. Extra-Vehicular Activity

    Perception Based on Stereoscopic Vision, Path Planning, and Navigation Strategies for Autonomous Robotic Exploration

    Get PDF
    Unpublished thesis of the Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 13-05-2015.

    This thesis addresses the development of an autonomous navigation strategy based on computer vision for autonomous robotic exploration of planetary surfaces. A series of subsystems, modules, and dedicated software were developed for this research, since most existing tools in this domain are the property of national space agencies and not accessible to the scientific community. A modular, multi-layer software architecture with several hierarchical levels was designed to host the set of algorithms implementing the autonomous navigation strategy and to guarantee software portability, reuse, and hardware independence. The work also includes the design of a framework to support the development of navigation strategies. It is partially based on open-source tools available to any researcher or institution, with the necessary adaptations and extensions, and provides 3D simulation capabilities, models of robotic vehicles, sensors, and operational environments emulating planetary surfaces such as Mars, for the functional analysis and validation of the navigation strategies developed. This framework also offers debugging and monitoring capabilities.

    The thesis consists of two main parts. The first addresses the design and development of the high-level autonomy capabilities of a rover, focusing on autonomous navigation, supported by the simulation and monitoring capabilities of the framework above. A set of field experiments was carried out with a real robot and hardware, detailing results, algorithm processing times, and the behavior and performance of the overall system. As a result, the perception system was identified as a crucial component of the navigation strategy and, therefore, the main focus for potential optimizations and improvements of the system.

    Consequently, the second part of this work tackles the correspondence problem in stereo images and the 3D reconstruction of unstructured natural environments. A series of matching algorithms, image processes, and filters were analyzed. It is generally assumed that the intensities of corresponding points in the two images of a stereo pair are the same. However, this assumption was found to be frequently false, even though both images are acquired with a vision system composed of two identical cameras. An expert system is therefore proposed for the automatic correction of intensities in stereo image pairs and 3D reconstruction of the environment, based on image processes not previously applied in the field of stereo vision: homomorphic filtering and histogram matching, designed to correct intensities in a coordinated way by adjusting one image as a function of the other. The results were further optimized through a grouping process based on the principle of spatial continuity, designed to eliminate false positives and erroneous matches. The effects of applying these filters before and after the matching process were studied, and their efficiency favorably verified. Their application yielded a larger number of valid matches than the results obtained without them, achieving significant improvements in the disparity maps and, therefore, in the overall perception and 3D reconstruction processes.
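The coordinated intensity correction described in this abstract relies in part on histogram matching: adjusting one image of the stereo pair so its intensity distribution follows the other. A minimal NumPy sketch of that operation (the function name and toy images are illustrative, not the thesis software):

```python
import numpy as np

def match_histogram(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap `source` intensities so its histogram matches `reference`."""
    src_vals, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_vals, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Cumulative distribution of each image, normalized to [0, 1].
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # Map each source intensity to the reference intensity whose
    # cumulative probability is closest.
    mapped = np.interp(src_cdf, ref_cdf, ref_vals)
    return mapped[src_idx].reshape(source.shape)

# Toy 2x2 "images": the left pair member is remapped onto the right's range.
left = np.array([[0, 0], [10, 10]])
right = np.array([[100, 100], [200, 200]])
corrected = match_histogram(left, right)  # -> [[100, 100], [200, 200]]
```

Matching the left image against the right (or vice versa) before stereo correspondence makes the constant-intensity assumption of most matching costs hold more often, which is consistent with the improved disparity maps reported above.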

    Common Data Fusion Framework: An open-source Common Data Fusion Framework for space robotics

    Get PDF
    Multisensor data fusion plays a vital role in providing autonomous systems with environmental information crucial for reliable functioning. In this article, we summarize the modular structure of the newly developed and released Common Data Fusion Framework and explain how it is used. Sensor data are registered and fused within the Common Data Fusion Framework to produce comprehensive 3D environment representations and pose estimations. The proposed software components to model this process in a reusable manner are presented through a complete overview of the framework, then the provided data fusion algorithms are listed, and through the case of 3D reconstruction from 2D images, the Common Data Fusion Framework approach is exemplified. The Common Data Fusion Framework has been deployed and tested in various scenarios that include robots performing operations of planetary rover exploration and tracking of orbiting satellites
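The reusable, modular processing structure described above can be illustrated with a toy pipeline of fusion steps. The node interface, node names, and numeric stand-ins below are hypothetical, not the actual Common Data Fusion Framework API:

```python
from abc import ABC, abstractmethod

class FusionNode(ABC):
    """One reusable data-fusion step operating on a shared data dict."""
    @abstractmethod
    def process(self, data: dict) -> dict:
        ...

class Calibrate(FusionNode):
    """Apply a fixed bias correction to a raw range reading."""
    def process(self, data: dict) -> dict:
        data["range"] = data["raw_range"] - 0.05
        return data

class FusePose(FusionNode):
    """Weighted average of two 1-D position estimates."""
    def process(self, data: dict) -> dict:
        data["x"] = 0.7 * data["x_visual"] + 0.3 * data["x_wheel"]
        return data

class Pipeline:
    """Chains nodes; each node reads and extends the shared data dict."""
    def __init__(self, nodes: list[FusionNode]):
        self.nodes = nodes

    def run(self, data: dict) -> dict:
        for node in self.nodes:
            data = node.process(data)
        return data

out = Pipeline([Calibrate(), FusePose()]).run(
    {"raw_range": 2.05, "x_visual": 1.0, "x_wheel": 2.0})
# out["range"] is approximately 2.0, out["x"] approximately 1.3
```

The value of the pattern is that each node is self-contained and reorderable, so the same nodes can be recombined for different scenarios (e.g. rover exploration versus satellite tracking) without rewriting the pipeline.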

    Service Oriented Robotic Architecture for Space Robotics: Design, Testing, and Lessons Learned

    Get PDF
    This paper presents the lessons learned from six years of experiments with planetary rover prototypes running the Service Oriented Robotic Architecture (SORA) developed by the Intelligent Robotics Group (IRG) at the NASA Ames Research Center. SORA relies on proven software engineering methods and technologies applied to space robotics. Based on a Service Oriented Architecture and robust middleware, SORA encompasses on-board robot control and a full suite of software tools necessary for remotely operated exploration missions. SORA has been field tested in numerous scenarios of robotic lunar and planetary exploration. The experiments conducted by IRG with SORA exercise a large set of the constraints encountered in space applications: remote robotic assets, flight relevant science instruments, distributed operations, high network latencies and unreliable or intermittent communication links. In this paper, we present the results of these field tests in regard to the developed architecture, and discuss its benefits and limitations.
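The core service-oriented pattern behind an architecture like SORA, where components expose named services through middleware rather than linking to each other directly, can be sketched in a few lines. The registry, service names, and handlers below are illustrative, not SORA's interface:

```python
class ServiceRegistry:
    """Minimal service-oriented pattern: components register named
    services; clients look them up by name instead of holding direct
    references to the implementing component."""
    def __init__(self):
        self._services = {}

    def register(self, name: str, handler) -> None:
        self._services[name] = handler

    def call(self, name: str, **kwargs):
        return self._services[name](**kwargs)

registry = ServiceRegistry()
registry.register("drive", lambda distance: f"driving {distance} m")
registry.register("camera", lambda: "image captured")

print(registry.call("drive", distance=3))  # driving 3 m
print(registry.call("camera"))             # image captured
```

Decoupling callers from implementations this way is what lets a service be moved across an unreliable network link, or swapped for a flight-relevant instrument, without changing client code.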

    An Investigation of Different Modeling Techniques for Autonomous Robot Navigation

    Get PDF
    This research aims to give recommendations towards modeling the navigation control architectures for an autonomous rover designed for an unstructured, outdoors environment. These recommendations are equally applicable to other autonomous vehicles, such as aircraft or underwater vehicles. Many successful architectures for this application have been developed, but there is no common terminology for the discussion of robotics architectures and their properties in general. This paper suggests the use of terms borrowed from administrative theory to facilitate interdisciplinary dialog about the tradeoffs of various kinds of models for robotics and similar systems.

    Past approaches to modeling autonomous robot navigation architectures have broken the architecture up into layers or levels. The upper levels or layers make high-level decisions about how the robot is going to accomplish a task, and the lower levels or layers make low-level decisions. This is analogous to the CEO of a corporation telling the managers how he wants the corporation to work towards its goal. The managers each oversee a part of the corporation. The workers are told what to do, but still make low-level decisions such as how hard to twist a screw, what tool to use to remove a rivet, or to do something other than what they were told in the interest of safety.

    Traditionally, there have been two or three layers for robot architectures, and every module developed fits into one of these layers. Every branch of the hierarchy has one module in each of the layers. The reasons given for breaking the architecture up into two or three layers vary from implementation to implementation. This paper aims to take a more generalized view. The benefits of the two or three layered approach are well published, including reliability, reusability, and scalability among others.

    This paper asserts that these layers are unnecessary, and that vertical specialization can be implemented to a different degree on different branches of the hierarchy. For example, the velocity controller on a rover might have two layers, whereas the steering controller on the same rover might have four. They share the highest layer, which is the navigational planner that coordinates them. But the two branches of hierarchy between the navigational planner and the two actuators look very different from one another. This facilitates a decentralization of the decision making duties and greater freedom in the process of breaking the navigation system up into modules.
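The variable-depth hierarchy argued for above can be made concrete with a toy example: a shared navigational planner coordinating a shallow velocity branch and a deeper steering branch. All class names, gains, and limits below are illustrative:

```python
class VelocityController:
    """Shallow branch: the planner talks directly to the wheel actuator."""
    def command(self, target_speed: float) -> dict:
        return {"wheel_torque": 0.5 * target_speed}  # toy gain

class SteeringServo:
    """Lowest layer of the deeper branch: rate -> clamped servo angle."""
    def command(self, rate: float) -> dict:
        return {"servo_angle": max(-30.0, min(30.0, rate))}

class SteeringPlanner:
    """Intermediate layer of the deeper branch; delegates downward."""
    def command(self, heading_error: float) -> dict:
        return SteeringServo().command(0.8 * heading_error)  # toy gain

class NavigationPlanner:
    """Shared top layer coordinating branches of different depths."""
    def step(self, target_speed: float, heading_error: float) -> dict:
        return {**VelocityController().command(target_speed),
                **SteeringPlanner().command(heading_error)}

cmd = NavigationPlanner().step(2.0, 10.0)
# cmd == {"wheel_torque": 1.0, "servo_angle": 8.0}
```

Note that nothing forces the two branches below `NavigationPlanner` to have the same number of layers; each branch is only as deep as its own decision making requires, which is exactly the decentralization the paper advocates.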

    Toward a Test Environment for Autonomous Controllers

    Get PDF
    Over the last two decades, increasing attention has been devoted to the use of high-level task planning in robotic control, aiming to deploy advanced robotic systems in challenging scenarios where a high degree of autonomy is required. Nevertheless, an interesting open problem in the literature is the lack of a well-defined methodology for approaching the design of deliberative systems and for fairly comparing different approaches to deliberation. This paper presents the general idea of an environment for facilitating knowledge engineering for autonomy, and in particular for enabling accurate experiments on planning and execution systems for robotics. It discusses features of the On-Ground Autonomy Test Environment (OGATE), a general testbench for interfacing deliberative modules. In particular, we present features of an initial instance of such a system built to support the GOAC robotic software.
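A testbench for fairly comparing deliberative modules, in the spirit described above, reduces to running each planner on the same scenarios and recording the same metrics. The harness below is a hypothetical sketch of that idea, not OGATE's actual interface:

```python
import time

def benchmark(planners: dict, scenarios: list) -> list:
    """Run each planner on each scenario, recording success and wall time.
    `planners` maps a name to a function scenario -> plan (None = failure)."""
    results = []
    for name, planner in planners.items():
        for scenario in scenarios:
            start = time.perf_counter()
            plan = planner(scenario)
            results.append({
                "planner": name,
                "scenario": scenario["name"],
                "solved": plan is not None,
                "seconds": time.perf_counter() - start,
            })
    return results

# Two toy planners on one toy scenario.
scenarios = [{"name": "traverse", "distance": 5}]
planners = {
    "greedy": lambda s: ["move"] * s["distance"],
    "lazy": lambda s: None,  # never finds a plan
}
for row in benchmark(planners, scenarios):
    print(row["planner"], row["solved"])
```

Holding the scenarios and metrics fixed while swapping the planner is what makes the comparison fair; a real testbench would add richer metrics (plan quality, replanning count) behind the same loop.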