
    Software tools for the cognitive development of autonomous robots

    Robotic systems are evolving towards higher degrees of autonomy. This paper reviews the cognitive tools currently available for the fulfilment of abstract or long-term goals, as well as for learning and modifying robot behaviour.

    Collection and conceptualization of plan-based robot activity experiences for long-term skill improvement

    Robot learning is a prominent research direction in intelligent robotics. Robotics involves the integration of multiple technologies, such as sensing, planning, acting, and learning. In robot learning, the long-term goal is to develop robots that learn to perform tasks and continuously improve their knowledge and skills through observation and exploration of the environment and interaction with users. While significant research has been performed on learning motor behavior primitives, the topic of learning high-level representations of activities, and of classes of activities that decompose into sequences of actions, has not been sufficiently addressed. Learning at the task level is key to increasing the robots' autonomy and flexibility. High-level task knowledge is essential for intelligent robotics since it makes robot programs less dependent on the platform and eases knowledge exchange between robots with different kinematics. The goal of this thesis is to contribute to the development of cognitive robotic capabilities, including supervised experience acquisition through human-robot interaction, high-level task learning from the acquired experiences, and task planning using the acquired task knowledge. A framework containing the required cognitive functions for learning and reproduction of high-level aspects of experiences is proposed. In particular, we propose and formalize the notion of Experience-Based Planning Domains (EBPDs) for long-term learning and planning. A human-robot interaction interface is used to provide the robot with step-by-step instructions on how to perform tasks. Approaches to recording plan-based robot activity experiences, including relevant perceptions of the environment and actions taken by the robot, are presented. A conceptualization methodology is presented for acquiring task knowledge in the form of activity schemata from experiences. The conceptualization approach combines different techniques, including deductive generalization, several forms of abstraction, and feature extraction, and covers loop detection, scope inference, and goal inference. Problem solving in EBPDs is achieved using a two-layer problem solver comprising an abstract planner, which derives an abstract solution for a given task problem by applying a learned activity schema, and a concrete planner, which refines the abstract solution into a concrete solution. The architecture and the learning and planning methods are applied and evaluated in several real and simulated scenarios. Finally, the developed learning methods are compared, and the conditions under which each of them is most applicable are discussed.
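
    To make the two-layer idea concrete, here is a minimal illustrative sketch, not the thesis implementation: it assumes a hypothetical ActivitySchema structure whose abstract steps are first laid out by an abstract planner and then expanded into concrete actions, loosely mirroring how an EBPD's abstract and concrete planners divide the work.

```python
# Minimal illustrative sketch of two-layer planning over an EBPD-like
# structure. All names (ActivitySchema, abstract_plan, concrete_plan) are
# hypothetical stand-ins for the thesis's learned activity schemata.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ActivitySchema:
    """A learned, abstracted task description: an ordered sequence of
    abstract steps plus a mapping from each step to a function that
    expands it into concrete actions for the current state."""
    name: str
    abstract_steps: List[str]
    expansions: Dict[str, Callable[[dict], List[str]]]

def abstract_plan(schema: ActivitySchema, goal: dict) -> List[str]:
    """Abstract layer: instantiate the learned schema for a task problem.
    A real abstract planner would also bind parameters, unroll detected
    loops, and check the schema's inferred scope against the goal."""
    return list(schema.abstract_steps)

def concrete_plan(schema: ActivitySchema, steps: List[str], state: dict) -> List[str]:
    """Concrete layer: refine each abstract step into executable actions."""
    actions: List[str] = []
    for step in steps:
        actions.extend(schema.expansions[step](state))
    return actions

# Toy "clear the table" schema, as if conceptualized from one demonstration.
schema = ActivitySchema(
    name="clear_table",
    abstract_steps=["pick", "place"],
    expansions={
        "pick": lambda s: [f"move_to({s['object']})", f"grasp({s['object']})"],
        "place": lambda s: [f"move_to({s['bin']})", f"release({s['object']})"],
    },
)

state = {"object": "cup", "bin": "tray"}
steps = abstract_plan(schema, goal={"cleared": "table"})
print(concrete_plan(schema, steps, state))
# -> ['move_to(cup)', 'grasp(cup)', 'move_to(tray)', 'release(cup)']
```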

    Technology assessment of advanced automation for space missions

    Six general classes of technology requirements derived during the mission definition phase of the study were identified as having maximum importance and urgency: autonomous, world-model-based information systems; learning and hypothesis formation; natural language and other man-machine communication; space manufacturing; teleoperators and robot systems; and computer science and technology.

    Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language

    Large foundation models can exhibit unique capabilities depending on the domain of data they are trained on. While these domains are generic, they may only barely overlap. For example, visual-language models (VLMs) are trained on Internet-scale image captions, but large language models (LMs) are further trained on Internet-scale text with no images (e.g. from spreadsheets to SAT questions). As a result, these models store different forms of commonsense knowledge across different domains. In this work, we show that this model diversity is symbiotic and can be leveraged to build AI systems with structured Socratic dialogue, in which new multimodal tasks are formulated as a guided language-based exchange between different pre-existing foundation models, without additional finetuning. In the context of egocentric perception, we present a case study of Socratic Models (SMs) that can provide meaningful results for complex tasks such as generating free-form answers to contextual questions about egocentric video, by formulating video Q&A as short story Q&A, i.e. summarizing the video into a short story and then answering questions about it. Additionally, SMs can generate captions for Internet images and are competitive with the state of the art on zero-shot video-to-text retrieval, with 42.8 R@1 on MSR-VTT 1k-A. SMs demonstrate how to compose foundation models zero-shot to capture new multimodal functionalities, without domain-specific data collection. Prototypes are available at socraticmodels.github.io.
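
    As a rough illustration of the composition pattern described above (and not the authors' released code), the sketch below chains a visual-language model and a language model purely through text: frame captions become a short story, and the story is then queried. The vlm and lm callables are hypothetical stand-ins for any pre-trained captioner and LLM.

```python
# Illustrative sketch of Socratic-style composition: a VLM and an LM
# cooperate through plain text, with no finetuning. The model callables
# are hypothetical stand-ins, not a real API.
from typing import Callable, Sequence

VLM = Callable[[bytes], str]   # image bytes -> caption
LM = Callable[[str], str]      # prompt -> completion

def video_qa(frames: Sequence[bytes], question: str, vlm: VLM, lm: LM) -> str:
    # 1. Perception: caption a few sampled frames with the VLM.
    captions = [vlm(frame) for frame in frames]
    # 2. Summarization: have the LM compress the captions into a short story.
    story = lm(
        "Summarize these egocentric video frame captions as a short story:\n"
        + "\n".join(f"- {c}" for c in captions)
    )
    # 3. Reasoning: answer the question about the story, not the raw video.
    return lm(f"Story: {story}\nQuestion: {question}\nAnswer:")

# Toy stand-ins so the sketch runs end to end without any model weights.
fake_vlm: VLM = lambda frame: f"a hand reaches toward a mug ({len(frame)} bytes)"
fake_lm: LM = lambda prompt: "Someone picks up a mug from the table."
print(video_qa([b"frame0", b"frame1"], "What is the person holding?", fake_vlm, fake_lm))
```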

    Automation and robotics for the National Space Program

    The emphasis on automation and robotics in augmenting human-centered systems for the space station is discussed. How automation and robotics can amplify the capabilities of humans is detailed, and a developmental program for the space station is outlined.

    The Sixth Annual Workshop on Space Operations Applications and Research (SOAR 1992)

    This document contains papers presented at the Space Operations, Applications, and Research Symposium (SOAR) hosted by the U.S. Air Force (USAF) on 4-6 Aug. 1992 and held at the JSC Gilruth Recreation Center. The symposium was cosponsored by the Air Force Materiel Command and by NASA/JSC. Key technical areas covered during the symposium were robotics and telepresence, automation and intelligent systems, human factors, life sciences, and space maintenance and servicing. SOAR differed from most other conferences in that it was concerned with Government-sponsored research and development relevant to aerospace operations. The symposium's proceedings include papers covering various disciplines presented by experts from NASA, the USAF, universities, and industry.

    Developmental Bootstrapping of AIs

    Although some current AIs surpass human abilities in closed artificial worlds such as board games, their abilities in the real world are limited. They make strange mistakes and do not notice them. They cannot be instructed easily, fail to use common sense, and lack curiosity. They do not make good collaborators. Mainstream approaches for creating AIs are the traditional manually constructed symbolic AI approach and generative and deep learning approaches, including large language models (LLMs). These systems are not well suited for creating robust and trustworthy AIs. Although it is outside of the mainstream, the developmental bootstrapping approach has more potential. In developmental bootstrapping, AIs develop competences like human children do. They start with innate competences. They interact with the environment and learn from their interactions. They incrementally extend their innate competences with self-developed competences. They interact with and learn from people and establish perceptual, cognitive, and common grounding. They acquire the competences they need through bootstrapping. However, developmental robotics has not yet produced AIs with robust adult-level competences. Projects have typically stopped at the Toddler Barrier, corresponding to human infant development at about two years of age, before speech is fluent. They also do not bridge the Reading Barrier, that is, skillfully and skeptically drawing on the socially developed information resources that power current LLMs. The next competences in human cognitive development involve intrinsic motivation, imitation learning, imagination, coordination, and communication. This position paper lays out the logic, prospects, gaps, and challenges for extending the practice of developmental bootstrapping to acquire further competences and create robust, resilient, and human-compatible AIs.

    Artificial general intelligence: Proceedings of the Second Conference on Artificial General Intelligence, AGI 2009, Arlington, Virginia, USA, March 6-9, 2009

    Artificial General Intelligence (AGI) research focuses on the original and ultimate goal of AI – to create broad human-like and transhuman intelligence – by exploring all available paths, including theoretical and experimental computer science, cognitive science, neuroscience, and innovative interdisciplinary methodologies. Due to the difficulty of this task, for the last few decades the majority of AI researchers have focused on what has been called narrow AI – the production of AI systems displaying intelligence regarding specific, highly constrained tasks. In recent years, however, more and more researchers have recognized the necessity – and feasibility – of returning to the original goals of the field. Increasingly, there is a call for a transition back to confronting the more difficult issues of human-level intelligence and, more broadly, artificial general intelligence.