
    Determining robot actions for tasks requiring sensor interaction

    The performance of non-trivial tasks by a mobile robot has been a long-term objective of robotics research. One of the major stumbling blocks to this goal is the conversion of high-level planning goals and commands into actuator and sensor processing controls. In order for a mobile robot to accomplish a non-trivial task, the task must be described in terms of primitive actions of the robot's actuators. Most non-trivial tasks require the robot to interact with its environment, thus necessitating coordination of sensor processing and actuator control to accomplish the task. The main contention is that the transformation from the high-level description of the task to the primitive actions should be performed primarily at execution time, when knowledge about the environment can be obtained through sensors. It is proposed to produce the detailed plan of primitive actions by using a collection of low-level planning components that contain domain-specific knowledge and knowledge about the available sensors, actuators, and sensor/actuator processing. This collection will perform signal and control processing as well as serve as a control interface between an actual mobile robot and a high-level planning system. Previous research has shown the usefulness of high-level planning systems for coordinating activities so as to achieve a goal, but none have been fully applied to actual mobile robots due to the complexity of interacting with sensors and actuators. This control interface is currently being implemented on a LABMATE mobile robot connected to a SUN workstation and will be developed to enable the LABMATE to perform non-trivial, sensor-intensive tasks as specified by a planning system
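    The idea of deferring task expansion to execution time can be sketched as follows. This is a minimal illustration, not the paper's implementation; the task name, sensor keys, and primitive actions are all assumptions made for the example.

    ```python
    # Illustrative sketch: a high-level task is expanded into primitive
    # actuator actions at execution time, using current sensor readings
    # rather than a plan computed entirely in advance.

    def read_sensors():
        # Stand-in for real sensor processing on the robot.
        return {"door_open": True}

    def expand(task, sensors):
        # Low-level planning component holding domain-specific knowledge:
        # the same high-level task yields different primitives depending
        # on what the sensors report right now.
        if task == "enter_room":
            if sensors["door_open"]:
                return ["move_forward", "move_forward"]
            return ["push_door", "move_forward"]
        raise ValueError(f"unknown task: {task}")

    primitives = expand("enter_room", read_sensors())
    ```

    The point of the sketch is that the mapping from task to primitives is resolved only when sensor data is available, which is the paper's central contention.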

    Challenging the Computational Metaphor: Implications for How We Think

    This paper explores the role of the traditional computational metaphor in our thinking as computer scientists, its influence on epistemological styles, and its implications for our understanding of cognition. It proposes to replace the conventional metaphor (computation as a sequence of steps) with the notion of a community of interacting entities, and examines the ramifications of such a shift for the various ways in which we think

    Planning in subsumption architectures

    A subsumption planner using a parallel distributed computational paradigm, based on the subsumption architecture for control of real-world-capable robots, is described. Virtual sensor state space is used as a planning tool to visualize the robot's anticipated effect on its environment. Decision sequences are generated based on the environmental situation expected at the time the robot must commit to a decision. Between decision points, the robot performs in a preprogrammed manner. A rudimentary, domain-specific partial world model contains enough information to extrapolate the end results of the rote behavior between decision points. A collective network of predictors operates in parallel with the reactive network, forming a recurrent network which generates plans as a hierarchy. Details of a plan segment are generated only when its execution is imminent. The use of the subsumption planner is demonstrated on a simple maze navigation problem
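    The layered arbitration at the heart of a subsumption architecture can be sketched briefly. This is a generic illustration of the subsumption idea, not the paper's planner; the layer names and actions are invented for the example.

    ```python
    # Minimal sketch of subsumption-style arbitration: behaviors are
    # ordered layers, and a higher-priority layer "subsumes" (overrides)
    # the output of the layers beneath it.

    def avoid(state):
        # Higher layer: fires only when a sensor reports an obstacle.
        if state.get("obstacle"):
            return "turn_left"
        return None  # no output -> defer to lower layers

    def wander(state):
        # Lowest layer: default rote behavior between decision points.
        return "move_forward"

    LAYERS = [avoid, wander]  # highest priority first

    def control(state):
        for layer in LAYERS:
            action = layer(state)
            if action is not None:
                return action  # first responding layer wins
    ```

    The planner described in the abstract sits on top of such a reactive network, predicting what the layered behaviors will do between decision points.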

    Distributed Lazy Q-learning for Cooperative Mobile Robots

    Compared to single-robot learning, cooperative learning adds the challenge of a much larger search space (the combined individual search spaces), awareness of other team members, and the synthesis of the individual behaviors with respect to the task given to the group. Over the years, reinforcement learning has emerged as the main learning approach in autonomous robotics, and lazy learning has become the leading bias, reducing the time required by an experiment to the time needed to test the learned behavior's performance. These two approaches have been combined in what is now called lazy Q-learning, a very efficient single-robot learning paradigm. We propose an extension of this learning paradigm to teams of robots: the "pessimistic" algorithm, able to compute for each team member a lower bound on the utility of executing an action in a given situation. We use the cooperative multi-robot observation of multiple moving targets (CMOMMT) application as an illustrative example, and study the efficiency of the pessimistic algorithm in its task of inducing learning of cooperation
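    The "pessimistic" lower bound described above can be illustrated with a toy joint-utility table: a robot values its own action by the worst case over its teammates' possible choices. The Q-values, action names, and two-robot setup are invented for this sketch and are not from the paper.

    ```python
    # Hypothetical joint Q-values for a 2-robot team: Q_joint[(a1, a2)]
    # is the team utility when robot 1 takes a1 and robot 2 takes a2.
    Q_joint = {
        ("track", "track"): 0.4,
        ("track", "search"): 0.9,
        ("search", "track"): 0.8,
        ("search", "search"): 0.2,
    }

    def pessimistic_value(my_action, teammate_actions):
        # Lower bound on my_action's utility: assume the worst
        # teammate choice (the "pessimistic" combination).
        return min(Q_joint[(my_action, a)] for a in teammate_actions)

    actions = ["track", "search"]
    bounds = {a: pessimistic_value(a, actions) for a in actions}
    best = max(bounds, key=bounds.get)  # best worst-case action
    ```

    Here "track" has the better guaranteed utility (0.4 versus 0.2), so the pessimistic robot would choose it even though "search" has the higher best case.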

    Synthesis of formation control for an aquatic swarm robotics system

    Formations are the spatial organization of objects or entities according to some predefined pattern. They can be found in nature in social animals, such as fish schools and insect colonies, where spontaneous organization into emergent structures takes place. Formations have a multitude of applications, such as in military and law enforcement scenarios, where they are used to increase operational performance. The concept is even present in collective sports such as football, which use formations as a strategy to increase a team's efficiency. Swarm robotics is an approach to the study of multi-robot systems composed of a large number of simple units, inspired by self-organization in animal societies. These systems have the potential to conduct tasks too demanding for a single robot operating alone. When applied to the coordination of such systems, formations allow for coordinated motion and enable the swarm to increase its sensing efficiency as a whole. In this dissertation, we present a virtual structure formation control synthesis for a multi-robot system. Control is synthesized through the use of evolutionary robotics, from which the desired collective behavior emerges while displaying key features such as fault tolerance and robustness. Initial experiments on formation control synthesis were conducted in a simulation environment. We later developed an inexpensive aquatic robotic platform in order to conduct experiments in real-world conditions. Our results demonstrate that it is possible to synthesize formation control for a multi-robot system using evolutionary robotics. The developed robotic platform was used in several scientific studies
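    The evolutionary-robotics loop used to synthesize such controllers can be sketched in miniature. The toy below evolves a single "spacing" parameter toward a desired inter-robot distance; the fitness function, parameter, and hyperparameters are all assumptions for the example, not the dissertation's setup.

    ```python
    import random

    # Toy evolutionary loop: evolve a scalar "spacing" parameter so that
    # robots settle at a desired inter-robot distance.

    DESIRED_SPACING = 2.0

    def fitness(spacing):
        # Higher is better: penalize deviation from the desired spacing.
        return -abs(spacing - DESIRED_SPACING)

    def evolve(pop_size=20, generations=30, seed=0):
        rng = random.Random(seed)
        population = [rng.uniform(0.0, 5.0) for _ in range(pop_size)]
        for _ in range(generations):
            # Select the fitter half, then refill with mutated copies.
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]
            population = parents + [p + rng.gauss(0, 0.1) for p in parents]
        return max(population, key=fitness)
    ```

    In the actual work the evolved genome parameterizes a full controller and fitness is evaluated by simulating the swarm, but the select-mutate-reevaluate loop has this same shape.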

    A Review on Current and Potential Applications of Robotics In Mental Health Care

    Robotics technology is most commonly associated with robots: physically embodied systems capable of causing physical change in the world. Robots execute this change via effectors that either move the robot itself (locomotion) or move items in the environment (manipulation), and they frequently make judgments based on data from sensors. Robot autonomy can range from fully teleoperated to fully autonomous (the robot is entirely independent). The term robotics technology also encompasses related technologies, such as sensor systems, data processing algorithms, and so forth. In recent years the field has evolved outward, with an emphasis on the difficulties of dealing with actual people in the real world. This transition has been referred to as human-centered robotics in the literature, and a topic that has developed in the last decade around these difficulties is known as human-robot interaction (HRI). The application of robotics technology in mental health treatment is still in its early stages, but it offers a potentially beneficial tool in the professional's arsenal

    Language to Rewards for Robotic Skill Synthesis

    Large language models (LLMs) have demonstrated exciting progress in acquiring diverse new capabilities through in-context learning, ranging from logical reasoning to code-writing. Robotics researchers have also explored using LLMs to advance the capabilities of robotic control. However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot. On the other hand, reward functions have been shown to be flexible representations that can be optimized for control policies to achieve diverse tasks, while their semantic richness makes them suitable to be specified by LLMs. In this work, we introduce a new paradigm that harnesses this realization by utilizing LLMs to define reward parameters that can be optimized to accomplish a variety of robotic tasks. Using reward as the intermediate interface generated by LLMs, we can effectively bridge the gap between high-level language instructions or corrections and low-level robot actions. Meanwhile, combining this with a real-time optimizer, MuJoCo MPC, empowers an interactive behavior creation experience where users can immediately observe the results and provide feedback to the system. To systematically evaluate the performance of our proposed method, we designed a total of 17 tasks for a simulated quadruped robot and a dexterous manipulator robot. We demonstrate that our proposed method reliably tackles 90% of the designed tasks, while a baseline using primitive skills as the interface with Code-as-policies achieves 50% of the tasks. We further validated our method on a real robot arm, where complex manipulation skills such as non-prehensile pushing emerge through our interactive system. Project page: https://language-to-reward.github.io
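    The reward-as-interface idea can be sketched with a toy pipeline: an instruction is mapped to reward parameters, and an optimizer (MuJoCo MPC in the paper) then searches for behavior maximizing that reward. The function names, parameter names, and the stand-in mapping below are all assumptions for illustration; they are not the paper's API.

    ```python
    # Illustrative sketch: instruction -> reward parameters -> reward score.

    def instruction_to_reward_params(instruction):
        # Stand-in for the LLM call: returns targets and weights that
        # parameterize a reward function.
        if "stand up" in instruction:
            return {"target_height": 0.6, "w_height": 1.0, "w_effort": 0.1}
        return {"target_height": 0.3, "w_height": 1.0, "w_effort": 0.1}

    def reward(state, params):
        # Quadratic tracking term plus an effort penalty; an MPC-style
        # optimizer would pick actions maximizing this over a horizon.
        height_err = (state["height"] - params["target_height"]) ** 2
        return (-params["w_height"] * height_err
                - params["w_effort"] * state["effort"])

    params = instruction_to_reward_params("stand up")
    better = reward({"height": 0.6, "effort": 0.0}, params)
    worse = reward({"height": 0.2, "effort": 0.0}, params)
    ```

    The key design choice is that the LLM only emits reward parameters, which are hardware-independent, while the optimizer handles the hardware-dependent low-level actions.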

    06251 Abstracts Collection -- Multi-Robot Systems: Perception, Behaviors, Learning, and Action

    From 19.06.06 to 23.06.06, the Dagstuhl Seminar 06251 "Multi-Robot Systems: Perception, Behaviors, Learning, and Action" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available