VISION-BASED URBAN NAVIGATION PROCEDURES FOR VERBALLY INSTRUCTED ROBOTS
The work presented in this thesis is part of a project in instruction-based learning (IBL) for mobile
robots, in which a robot is designed that can be instructed by its users through unconstrained natural
language. The robot uses vision guidance to follow route instructions in a miniature town model.
The aim of the work presented here was to determine the functional vocabulary of the robot in the
form of "primitive procedures". In contrast to previous work in the field of instructable robots, this
was done following a "user-centred" approach, where the main concern was to create primitive
procedures that can be directly associated with natural language instructions. To achieve this, a corpus
of human-to-human natural language route instructions was collected and analysed. A set of primitive
actions was found with which the collected corpus could be represented. These primitive actions were
then implemented as robot-executable procedures.
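The mapping from corpus phrases to robot-executable primitives can be sketched as follows (the phrase set, procedure names, and state representation are illustrative, not taken from the thesis):

```python
# Minimal sketch: corpus-derived instruction phrases dispatched to
# primitive procedures operating on a toy robot state.
def turn_left(state):
    return {**state, "heading": (state["heading"] + 90) % 360}

def turn_right(state):
    return {**state, "heading": (state["heading"] - 90) % 360}

def go_forward(state):
    return {**state, "distance": state["distance"] + 1}

# Phrases found in the instruction corpus, each associated with a primitive.
PRIMITIVES = {
    "turn left": turn_left,
    "turn right": turn_right,
    "go straight": go_forward,
}

def execute_route(instructions, state):
    """Run a route description as a sequence of primitive calls."""
    for phrase in instructions:
        state = PRIMITIVES[phrase](state)
    return state

route = ["go straight", "turn left", "go straight"]
print(execute_route(route, {"heading": 0, "distance": 0}))
# → {'heading': 90, 'distance': 2}
```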
Natural language instructions are under-specified when destined to be executed by a robot. This is
because instructors omit information that they consider "commonsense" and rely on the listener's
sensory-motor capabilities to determine the details of the task execution. In this thesis the under-specification
problem is solved by determining the missing information, either during the learning of
new routes or during their execution by the robot. During learning, the missing information is
determined by imitating the commonsense approach human listeners take to achieve the same
purpose. During execution, missing information, such as the location of road layout features
mentioned in route instructions, is determined from the robot's view by using image template
matching. The original contribution of this thesis, in both of these methods, lies in the fact that they are
driven by the natural language examples found in the corpus collected for the IBL project.
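Image template matching of the kind mentioned above can be sketched with plain normalized cross-correlation (the toy image and template below are invented; the thesis's actual road-feature templates are not shown):

```python
import numpy as np

def match_template(image, template):
    """Return the (row, col) where the template best matches the image,
    scored by zero-mean normalized cross-correlation."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos

# Toy example: a bright 2x2 "road feature" in an otherwise empty view.
img = np.zeros((8, 8))
img[3:5, 4:6] = 1.0
tmpl = np.zeros((4, 4))
tmpl[1:3, 1:3] = 1.0
print(match_template(img, tmpl))  # → (2, 3)
```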
During the testing phase, a high success rate for primitive calls, when these were considered individually,
showed that the under-specification problem had largely been solved. A novel method for testing the
primitive procedures as part of complete route descriptions is also proposed in this thesis. This was
done by comparing the performance of human subjects when driving the robot, following route
descriptions, with the performance of the robot when executing the same route descriptions. The
results obtained from this comparison clearly indicated where errors occur between the time a
human speaker gives a route description and the time the task is executed, whether by a human listener or
by the robot.
Finally, a software speed controller is proposed in this thesis to control the wheel speeds of
the robot used in this project. The controller employs PI (Proportional-Integral) and PID
(Proportional-Integral-Derivative) control and provides a good alternative to expensive hardware
controllers.
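A discrete PID speed loop of the kind described can be sketched as follows (the gains, time step, and first-order wheel model are illustrative assumptions, not values from the thesis):

```python
# Minimal discrete PID sketch for a wheel-speed loop.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt              # I term accumulates error
        derivative = (error - self.prev_error) / self.dt  # D term damps change
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Simulate a first-order wheel plant: speed relaxes toward the command u
# with time constant tau = 0.1 s, sampled every dt = 0.01 s.
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
speed, target = 0.0, 1.0
for _ in range(3000):
    u = pid.update(target, speed)
    speed += (u - speed) * 0.01 / 0.1

print(f"final speed: {speed:.3f}")  # settles at the 1.0 setpoint
```

Setting kd to zero turns the same loop into the PI variant mentioned above.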
Large Language Models for Robotics: A Survey
The human ability to learn, generalize, and control complex manipulation
tasks through multi-modality feedback suggests a unique capability, which we
refer to as dexterity intelligence. Understanding and assessing this
intelligence is a complex task. Amidst the swift progress and extensive
proliferation of large language models (LLMs), their applications in the field
of robotics have garnered increasing attention. LLMs possess the ability to
process and generate natural language, facilitating efficient interaction and
collaboration with robots. Researchers and engineers in the field of robotics
have recognized the immense potential of LLMs in enhancing robot intelligence,
human-robot interaction, and autonomy. Therefore, this comprehensive review
aims to summarize the applications of LLMs in robotics, delving into their
impact and contributions to key areas such as robot control, perception,
decision-making, and path planning. We first provide an overview of the
background and development of LLMs for robotics, followed by a description of
the benefits of LLMs for robotics and recent advancements in robotics models
based on LLMs. We then delve into the various techniques used in these models,
including those employed in perception, decision-making, control, and
interaction. Finally, we explore the applications of LLMs in robotics and some
potential challenges they may face in the near future. Embodied intelligence is
the future of intelligent science, and LLMs-based robotics is one of the
promising but challenging paths to achieve this.
Comment: Preprint. 4 figures, 3 tables.
Teaching robots parametrized executable plans through spoken interaction
While operating in domestic environments, robots will necessarily
face difficulties not envisioned by their developers at programming
time. Moreover, the tasks to be performed by a robot will often
have to be specialized and/or adapted to the needs of specific users
and specific environments. Hence, learning how to operate by interacting
with the user seems a key enabling feature to support the
introduction of robots in everyday environments.
In this paper we contribute a novel approach for learning, through
the interaction with the user, task descriptions that are defined as a
combination of primitive actions. The proposed approach makes
a significant step forward by making task descriptions parametric
with respect to domain specific semantic categories. Moreover, by
mapping the task representation into a task representation language,
we are able to express complex execution paradigms and to revise
the learned tasks in a high-level fashion. The approach is evaluated
in multiple practical applications with a service robot.
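The idea of task descriptions that are parametric over domain-specific semantic categories can be sketched like this (the categories, primitives, and the "bring" task are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Step:
    action: str    # primitive action name
    category: str  # semantic category the argument must belong to

# Semantic categories of the domain (here: a domestic environment).
CATEGORIES = {
    "Location": {"kitchen", "living_room"},
    "Object": {"cup", "book"},
}

# "Bring X to Y", learned once, reusable for any Object X and Location Y.
BRING = [Step("goto", "Object"), Step("grasp", "Object"),
         Step("goto", "Location"), Step("release", "Object")]

def instantiate(plan, bindings):
    """Ground a parametric plan, checking each argument's category."""
    grounded = []
    for step in plan:
        arg = bindings[step.category]
        if arg not in CATEGORIES[step.category]:
            raise ValueError(f"{arg} is not a {step.category}")
        grounded.append((step.action, arg))
    return grounded

print(instantiate(BRING, {"Object": "cup", "Location": "kitchen"}))
# → [('goto', 'cup'), ('grasp', 'cup'), ('goto', 'kitchen'), ('release', 'cup')]
```

Revising a learned task then amounts to editing the `Step` list rather than re-teaching it from scratch.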
Spatial language driven robot
This dissertation investigates methods for enabling a robot to interact with humans using spatial language. A prototype human-robot interaction system using spatial language, running on an autonomous robot, is proposed. The system comprises two complementary parts. One controls the robot through natural spatial language to find a target object and fetch it. The other generates a natural spatial language description of a target object in the robot's working environment. The first task is called spatial language grounding and the second spatial language generation. Both grounding and generation are end-to-end processes: the system determines its output solely from the human's natural language command during the interaction and from the raw perception data collected from the environment. Furniture recognizers are designed so that the robot can perceive the environment during the tasks. A hierarchical system translates the human's spatial language into a symbolic grounding model and then into robot actions. To reduce ambiguity in the interaction, a human demonstration system collects the human user's spatial concepts for building the robot's behaviour policies under different grounding models. A language generation system trained on a corpus of real human spatial language is proposed to automatically produce spatial descriptions of a target object's location. All modules of the system are evaluated in the physical environment and in a 3D robot simulator developed on ROS and GAZEBO.
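The hierarchy from spatial language to a symbolic grounding model and then to robot actions might be sketched as follows (the command grammar, relation symbols, and position offsets are hypothetical, not the dissertation's actual models):

```python
import re

# Symbolic spatial relations the toy grounding model understands.
RELATIONS = {"left of": "LEFT_OF", "right of": "RIGHT_OF", "behind": "BEHIND"}

def ground(command):
    """Parse 'fetch the <obj> <relation> the <landmark>' into symbols."""
    for phrase, symbol in RELATIONS.items():
        m = re.match(rf"fetch the (\w+) {phrase} the (\w+)", command)
        if m:
            return {"target": m.group(1), "relation": symbol,
                    "landmark": m.group(2)}
    raise ValueError("command not understood")

def to_action(grounding, landmarks):
    """Turn the symbolic grounding into a goal pose near the landmark."""
    x, y = landmarks[grounding["landmark"]]
    offset = {"LEFT_OF": (-1, 0), "RIGHT_OF": (1, 0), "BEHIND": (0, -1)}
    dx, dy = offset[grounding["relation"]]
    return ("goto", x + dx, y + dy, "grasp", grounding["target"])

g = ground("fetch the cup left of the sofa")
print(to_action(g, {"sofa": (4, 2)}))  # → ('goto', 3, 2, 'grasp', 'cup')
```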
Towards adaptive multi-robot systems: self-organization and self-adaptation
This publication is made freely accessible with the permission of the rights owner under an Alliance licence and a national licence (funded by the DFG, German Research Foundation). The development of complex system ensembles that operate in uncertain environments is a major challenge. The reason for this is that system designers are not able to fully specify the system during specification and development, before it is deployed. Natural swarm systems share similar characteristics, yet, being self-adaptive and able to self-organize, these systems show beneficial emergent behaviour. Similar concepts can be extremely helpful for artificial systems, especially in multi-robot scenarios, which require such solutions in order to be applicable to highly uncertain real-world applications. In this article, we present a comprehensive overview of state-of-the-art solutions in emergent systems, self-organization, self-adaptation, and robotics. We discuss these approaches in the light of a framework for multi-robot systems and identify similarities, differences, missing links, and open gaps that have to be addressed in order to make this framework possible.
Model-driven engineering approach to design and implementation of robot control system
In this paper we apply a model-driven engineering approach to designing
domain-specific solutions for robot control system development. We present a
case study of the complete process, including identification of the domain
meta-model, graphical notation definition and source code generation for
subsumption architecture -- a well-known example of robot control architecture.
Our goal is to show that both the definition of the robot-control architecture
and its supporting tools fit well into the typical workflow of model-driven
engineering development.
Comment: Presented at DSLRob 2011 (arXiv:cs/1212.3308).
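The subsumption architecture targeted by the case study can be illustrated with a minimal layered arbitration loop (the behaviours and sensor fields below are invented for illustration; a generated controller would follow the same pattern):

```python
# Subsumption sketch: layered behaviours where a higher layer, when
# applicable, suppresses the output of the layers below it.
def avoid(sensors):
    """Highest priority: back off when an obstacle is close."""
    if sensors["obstacle_dist"] < 0.3:
        return "reverse"
    return None  # not applicable -> defer to lower layers

def seek_goal(sensors):
    """Middle layer: head for the goal when it is visible."""
    if sensors["goal_visible"]:
        return "steer_to_goal"
    return None

def wander(sensors):
    """Bottom layer: default behaviour, always applicable."""
    return "drive_forward"

LAYERS = [avoid, seek_goal, wander]  # ordered top (subsumes) to bottom

def arbitrate(sensors):
    """Return the command of the highest applicable layer."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:
            return command

print(arbitrate({"obstacle_dist": 0.1, "goal_visible": True}))   # → reverse
print(arbitrate({"obstacle_dist": 1.0, "goal_visible": False}))  # → drive_forward
```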