Expressivity in Natural and Artificial Systems
Roboticists are trying to replicate animal behavior in artificial systems. Yet, quantitative bounds on the capacity of a moving platform (natural or artificial) to express information in the environment are not known. This paper presents a measure for the capacity of motion complexity -- the expressivity -- of articulated platforms (both natural and artificial) and shows that this measure is stagnant and unexpectedly limited in extant robotic systems. This analysis indicates trends of increasing capacity in both internal and external complexity for natural systems, while artificial, robotic systems have increased significantly in the capacity of computational (internal) states but have remained more or less constant in mechanical (external) state capacity. This work presents a way to analyze trends in animal behavior and shows that robots are not capable of the same multi-faceted behavior in rich, dynamic environments as natural systems.
Comment: Rejected from Nature, after review and appeal, July 4, 2018 (submitted May 11, 2018)
SANTO: Social Aerial NavigaTion in Outdoors
In recent years, advances in remote connectivity, miniaturization of electronic components, and computing power have led to the integration of these technologies into everyday devices such as cars and aerial vehicles. Among these, one consumer-grade option that has gained popularity is the drone, or unmanned aerial vehicle, namely the quadrotor. Although until recently they have not been used for commercial applications, their inherent potential for a number of tasks where small, intelligent devices are needed is huge. However, while the integrated hardware has advanced exponentially, the software used in these applications has not yet been exploited enough. Recently, this shift has become visible in the improvement of common robotics tasks, such as object tracking and autonomous navigation. Moreover, these challenges can grow when the dynamic nature of the real world is taken into account, where knowledge of the current environment is constantly changing. These settings arise in the improvement of human-robot interaction, where the potential use of these devices is clear, and algorithms are being developed to improve the situation. Using the latest advances in artificial intelligence, human brain behavior is simulated by so-called neural networks, in such a way that the computing system performs as similarly as possible to a human. To this end, the system learns by trial and error which, much like human learning, requires a considerable set of previous experiences for the algorithm to retain the desired behaviors. Applying these technologies to human-robot interaction narrows this gap. Even so, from a bird's-eye view, a noticeable portion of the time spent applying these technologies is required for the curation of a high-quality dataset, in order to ensure that the learning process is optimal and no wrong actions are retained.
Therefore, it is essential to have a development platform in place to ensure these principles are enforced throughout the whole process of creating and optimizing the algorithm. In this work, multiple existing handicaps found in pipelines of this computational scale are exposed, and each is approached in an independent and simple manner, so that the proposed solutions can be leveraged by the largest possible number of workflows. On one side, this project concentrates on reducing the number of bugs introduced by flawed data, to help researchers focus on developing more sophisticated models. On the other side, the shortage of integrated development systems for this kind of pipeline is addressed, with special care for those using simulated or controlled environments, with the goal of easing the continuous iteration of these pipelines. Thanks to the increasing popularity of drones, the research and development of autonomous capabilities has become easier. However, due to the challenge of integrating multiple technologies, the available software stack for this task is restricted. In this thesis, we highlight the divergences among unmanned-aerial-vehicle simulators and propose a platform to allow faster and more in-depth prototyping of machine learning algorithms for these drones.
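The dataset-curation concern described above can be illustrated with a minimal sketch: a pipeline stage that drops flawed samples before they reach training, so that "no wrong actions are retained". All names and validity rules here are hypothetical illustrations, not part of the SANTO platform.

```python
# Minimal sketch of a dataset-curation stage: filter out flawed samples
# (missing labels, empty or implausible sensor readings) before training.
# The sample schema and thresholds are illustrative assumptions only.

def is_valid(sample):
    """Keep a sample only if it has a label and plausible readings."""
    if sample.get("label") is None:
        return False
    readings = sample.get("readings", [])
    # Reject empty recordings or sensor values outside a plausible range.
    return len(readings) > 0 and all(-100.0 <= r <= 100.0 for r in readings)

def curate(dataset):
    """Return the subset of samples that pass validation."""
    return [s for s in dataset if is_valid(s)]

raw = [
    {"label": "turn_left", "readings": [0.1, -2.3]},
    {"label": None, "readings": [0.5]},      # missing label: dropped
    {"label": "hover", "readings": []},      # empty recording: dropped
    {"label": "ascend", "readings": [1e6]},  # implausible value: dropped
]
clean = curate(raw)
```

Running such a filter as a dedicated stage keeps data-quality bugs out of the model-development loop, which is the separation of concerns the thesis argues for.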
Choreographic and Somatic Approaches for the Development of Expressive Robotic Systems
As robotic systems move out of factory work cells into human-facing environments, questions of choreography become central to their design, placement, and application. With a human viewer or counterpart present, a system will automatically be interpreted by human beings -- through its context, style of movement, and form factor -- as an animate element of their environment. The
interpretation by this human counterpart is critical to the success of the
system's integration: knobs on the system need to make sense to a human
counterpart; an artificial agent should have a way of notifying a human
counterpart of a change in system state, possibly through motion profiles; and
the motion of a human counterpart may have important contextual clues for task
completion. Thus, professional choreographers, dance practitioners, and
movement analysts are critical to research in robotics. They have design
methods for movement that align with human audience perception, can identify
simplified features of movement for human-robot interaction goals, and have
detailed knowledge of the capacity of human movement. This article provides
approaches employed by one research lab, specific impacts on technical and
artistic projects within, and principles that may guide future such work. The
background section reports on choreography, somatic perspectives,
improvisation, the Laban/Bartenieff Movement System, and robotics. From this
context methods including embodied exercises, writing prompts, and community
building activities have been developed to facilitate interdisciplinary
research. The results of this work are presented as an overview of projects in areas like high-level motion planning, software development for rapid prototyping of movement, artistic output, and user studies that help us understand how people interpret movement. Finally, guiding principles for other groups to adopt are posited.
Comment: Under review at MDPI Arts Special Issue "The Machine as Artist (for the 21st Century)"
http://www.mdpi.com/journal/arts/special_issues/Machine_Artis
Enabling a Pepper Robot to provide Automated and Interactive Tours of a Robotics Laboratory
The Pepper robot has become a widely recognised face for the perceived
potential of social robots to enter our homes and businesses. However, to date,
commercial and research applications of the Pepper have been largely restricted
to roles in which the robot is able to remain stationary. This restriction is the result of a number of technical limitations, including limited sensing capabilities, which have in turn reduced the number of roles in which use of the robot can be explored. In this paper, we present our approach to solving these problems, with the intention of opening up new research applications for the robot. To demonstrate the applicability of our approach, we have framed this work within the context of providing interactive tours of an open-plan robotics laboratory.
Comment: 8 pages, Submitted to IROS 2018 (2018 IEEE/RSJ International Conference on Intelligent Robots and Systems), see https://bitbucket.org/pepper_qut/ for access to the software
Arena: A General Evaluation Platform and Building Toolkit for Multi-Agent Intelligence
Learning agents that are capable not only of taking tests but also of innovating are becoming a hot topic in AI. One of the most promising paths towards this vision is multi-agent learning, where agents act as the environment for each other, and improving each agent means proposing new problems for the others. However, existing evaluation platforms are either not compatible with multi-agent settings or limited to a specific game. That is, there is not yet a general evaluation platform for research on multi-agent intelligence. To this end, we introduce Arena, a general evaluation platform for multi-agent intelligence with 35 games of diverse logics and representations. Furthermore, multi-agent intelligence is still at a stage where many problems remain unexplored. Therefore, we provide a building toolkit for researchers to easily invent and build novel multi-agent problems from the provided game set, based on a GUI-configurable social tree and five basic multi-agent reward schemes. Finally, we provide Python implementations of five state-of-the-art deep multi-agent reinforcement learning baselines. Along with the baseline implementations, we release a set of the 100 best agents/teams that we trained with different training schemes for each game, as the base for evaluating agents with population performance. As such, the research community can perform comparisons under a stable and uniform standard. All the implementations and accompanying tutorials have been open-sourced for the community at https://sites.google.com/view/arena-unity/
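The population-performance evaluation described above can be sketched as a round-robin: a candidate agent plays every member of a fixed baseline population, and its score is the average payoff. This is a toy illustration only; the agents and the payoff function are hypothetical stand-ins, not Arena's actual API.

```python
# Hedged sketch of population-performance evaluation: score a candidate
# agent by averaging its payoff against a fixed population of baselines.
# The "strength" field and play_match payoff are toy assumptions.

def play_match(agent, opponent):
    """Toy payoff: +1 if the agent's strength exceeds the opponent's,
    0 on a tie, -1 otherwise. A real platform would run a full episode."""
    if agent["strength"] > opponent["strength"]:
        return 1
    if agent["strength"] == opponent["strength"]:
        return 0
    return -1

def population_performance(agent, population):
    """Average payoff of `agent` across every member of `population`."""
    return sum(play_match(agent, opp) for opp in population) / len(population)

baselines = [{"strength": s} for s in (1, 3, 5, 7)]
candidate = {"strength": 6}
score = population_performance(candidate, baselines)  # wins 3 of 4 matches
```

Because every candidate faces the same fixed population, scores are comparable across research groups, which is the "stable and uniform standard" the abstract refers to.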