Modular Customizable ROS-Based Framework for Rapid Development of Social Robots
Developing socially competent robots requires tight integration of robotics,
computer vision, speech processing, and web technologies. We present the
Socially-interactive Robot Software platform (SROS), an open-source framework
addressing this need through a modular layered architecture. SROS bridges the
Robot Operating System (ROS) layer for mobility with web and Android interface
layers using standard messaging and APIs. Specialized perceptual and
interactive skills are implemented as ROS services for reusable deployment on
any robot. This facilitates rapid prototyping of collaborative behaviors that
synchronize perception with physical actuation. We experimentally validated
core SROS technologies, including computer vision, speech processing, and
GPT-2-based speech autocompletion, implemented as plug-and-play ROS services.
Modularity is
demonstrated through the successful integration of an additional ROS package,
without changes to hardware or software platforms. The capabilities enabled
confirm SROS's effectiveness in developing socially interactive robots through
synchronized cross-domain interaction. Through demonstrations of synchronized
multimodal behaviors on an example platform, we illustrate how the SROS
architecture addresses the shortcomings of previous work, lowering the barrier
for researchers to advance the state of the art in adaptive, collaborative,
customizable human-robot systems through novel applications that integrate
perceptual and social abilities.
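The "skill as a reusable service" pattern the SROS abstract describes can be sketched in plain Python. This is a hypothetical illustration, not SROS's actual API: the names (`SkillServer`, `speech/autocomplete`) are invented, and a real deployment would advertise each skill as a ROS service rather than use an in-process registry.

```python
# Hypothetical sketch of the "skill as a service" pattern: each perceptual or
# interactive skill registers under a name, and any robot platform invokes it
# through a uniform request/response interface. In SROS proper these would be
# ROS services; this in-process registry only mimics that call pattern.

from typing import Callable, Dict

class SkillServer:
    """Minimal stand-in for a ROS-style service registry."""

    def __init__(self) -> None:
        self._skills: Dict[str, Callable[[dict], dict]] = {}

    def register(self, name: str, handler: Callable[[dict], dict]) -> None:
        # Analogous to advertising a ROS service under a well-known name.
        self._skills[name] = handler

    def call(self, name: str, request: dict) -> dict:
        # Analogous to a service call from any node, on any robot.
        if name not in self._skills:
            return {"ok": False, "error": f"unknown skill: {name}"}
        return self._skills[name](request)

server = SkillServer()
server.register("speech/autocomplete",
                lambda req: {"ok": True, "text": req["prefix"] + " world"})

print(server.call("speech/autocomplete", {"prefix": "hello"}))
```

Because every skill shares the same request/response shape, a new capability (e.g. an extra vision package) can be registered without touching the platform code that calls it, which is the modularity claim the abstract tests.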
Project Hermes: The Socially Assistive Tour-Guiding Robot
As the labor force available for non-technical tasks shrinks, service robotics is increasingly used in place of human labor to handle these tasks. Various studies have examined the impact of using robotics in a sociological context. Deploying service robots in social and labor environments highlights the need for cohesive Human-Robot Interaction (HRI). In this senior design project, we examine what it takes to use a service robot in place of a human for tasks normally reserved for humans. These tasks raise design considerations for performing emotion-centric activities and the need to deliver an effective and efficient service. In the project, codenamed Hermes, we developed a guided-tour robot that provides an interactive routine. Using the robot’s array of sensors and motors, the routine consists of navigating from one room to another, providing an audible explanation of each room, answering visitor questions, and moving on. Through its embedded microphones, the robot is capable of limited interaction with humans, providing feedback and performing tasks accordingly. Once the core functionality is developed, Hermes will be evaluated in a real-world environment to gather data and feedback. With all these considerations in hand, the design of the service robot needs to cover many of these areas for our framework. To address this need, we outline the relevant ideas and design considerations for the task.
The Penetration of Internet of Things in Robotics: Towards a Web of Robotic Things
As the Internet of Things (IoT) penetrates different domains and application
areas, it has recently entered also the world of robotics. Robotics constitutes
a modern and fast-evolving technology, increasingly being used in industrial,
commercial and domestic settings. IoT, together with the Web of Things (WoT)
could provide many benefits to robotic systems. Some of the benefits of IoT in
robotics have been discussed in related work. This paper moves one step
further, studying the actual current use of IoT in robotics through various
real-world examples identified through a bibliographic search. The paper
also examines the potential of WoT together with robotic systems,
investigating which IoT concepts, characteristics, architectures, hardware,
software, and communication methods are used in existing robotic systems,
which sensors and actions are incorporated in IoT-based robots, and in which
application areas. Finally, the current application of WoT in robotics is
examined and discussed.
Human Robot Collaborative Assembly Planning: An Answer Set Programming Approach
For planning an assembly of a product from a given set of parts, robots
necessitate certain cognitive skills: high-level planning is needed to decide
the order of actuation actions, while geometric reasoning is needed to check
the feasibility of these actions. For collaborative assembly tasks with humans,
robots require further cognitive capabilities, such as commonsense reasoning,
sensing, and communication skills, not only to cope with the uncertainty caused
by incomplete knowledge about the humans' behaviors but also to ensure safer
collaborations. We propose a novel method for collaborative assembly planning
under uncertainty that utilizes hybrid conditional planning extended with
commonsense reasoning and a rich set of communication actions for collaborative
tasks. Our method is based on answer set programming. We show the applicability
of our approach in a real-world assembly domain, where a bi-manual Baxter robot
collaborates with a human teammate to assemble furniture. This manuscript is
under consideration for acceptance in TPLP.
Comment: 36th International Conference on Logic Programming (ICLP 2020),
University of Calabria, Rende (CS), Italy, September 2020, 15 pages.
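The hybrid conditional plans the abstract describes can be illustrated with a small Python sketch: actuation actions execute in sequence, while sensing and communication actions branch the plan on their observed outcome. The action names and the furniture-assembly toy domain below are invented for illustration; the paper itself computes such plans with answer set programming (an ASP solver), not with code like this.

```python
# Hypothetical sketch of a conditional plan with sensing/communication actions.
# Actuation actions (Act) run unconditionally; sensing actions (Sense) branch
# the plan on their observed outcome, which is how a plan can cope with
# uncertainty about a human teammate's behavior.

from dataclasses import dataclass
from typing import Dict, List, Union

@dataclass
class Act:
    name: str                      # deterministic actuation action

@dataclass
class Sense:
    name: str                      # sensing or communication action
    branches: Dict[str, "Plan"]    # one sub-plan per possible observation

Plan = List[Union[Act, Sense]]

def execute(plan: Plan, observe) -> List[str]:
    """Walk a conditional plan, following the branch each observation selects."""
    trace: List[str] = []
    for step in plan:
        if isinstance(step, Act):
            trace.append(step.name)
        else:
            outcome = observe(step.name)
            trace.append(f"{step.name}={outcome}")
            trace.extend(execute(step.branches[outcome], observe))
    return trace

# Toy furniture-assembly plan: ask the human before attaching a part.
plan: Plan = [
    Act("pick(leg1)"),
    Sense("ask_human(holding_tabletop)", {
        "yes": [Act("attach(leg1, tabletop)")],
        "no":  [Act("place(leg1)"), Act("wait")],
    }),
]

print(execute(plan, lambda query: "yes"))
```

The branching structure is what distinguishes a conditional plan from a fixed action sequence: geometric feasibility and commonsense constraints prune the branches offline, and at execution time the robot simply follows whichever branch its observations select.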
Designing Human-Centered Collective Intelligence
Human-Centered Collective Intelligence (HCCI) is an emergent research area that seeks to bring together major research areas such as machine learning, statistical modeling, information retrieval, market research, and software engineering to address challenges in deriving intelligent insights and solutions through the collaboration of several intelligent sensors, devices, and data sources. An archetypal contextual CI scenario might be concerned with deriving affect-driven intelligence through multimodal emotion detection sources in a bid to determine the likability of one movie trailer over another. On the other hand, the key tenets of designing robust and evolutionary software and infrastructure architecture models that address cross-cutting quality concerns are of keen interest in the “Cloud” age of today. Some of the key quality concerns in CI scenarios span the gamut of security and privacy, scalability, performance, fault tolerance, and reliability. I present recent advances in CI system design, with a focus on highlighting optimal solutions for the aforementioned cross-cutting concerns. I also describe a number of design challenges and a framework that I have determined to be critical to designing CI systems. With inspiration from machine learning, computational advertising, ubiquitous computing, and sociable robotics, this work incorporates theories and concepts from various viewpoints to empower the collective intelligence engine, ZOEI, to discover affective state and emotional intent across multiple mediums. The discerned affective state is used in recommender systems, among others, to support content personalization. I dive into the design of optimal architectures that allow humans and intelligent systems to work collectively to solve complex problems. I present an evaluation of various studies that leverage the ZOEI framework to design collective intelligence.
WebAL Comes of Age: A review of the first 21 years of Artificial Life on the Web
We present a survey of the first 21 years of web-based artificial life (WebAL) research and applications, broadly construed to include the many different ways in which artificial life and web technologies might intersect. Our survey covers the period from 1994—when the first WebAL work appeared—up to the present day, together with a brief discussion of relevant precursors. We examine recent projects, from 2010–2015, in greater detail in order to highlight the current state of the art. We follow the survey with a discussion of common themes and methodologies that can be observed in recent work and identify a number of likely directions for future work in this exciting area.
A Survey on Human-aware Robot Navigation
Intelligent systems are increasingly part of our everyday lives and have been
integrated seamlessly to the point where it is difficult to imagine a world
without them. Physical manifestations of those systems, on the other hand, in
the form of embodied agents or robots, have so far been used only for specific
applications and are often limited to functional roles (e.g., in industrial,
entertainment, and military settings). Given the current growth and innovation in
the research communities concerned with the topics of robot navigation,
human-robot-interaction and human activity recognition, it seems like this
might soon change. Robots are increasingly easy to obtain and use and the
acceptance of them in general is growing. However, the design of a socially
compliant robot that can function as a companion needs to take various areas of
research into account. This paper is concerned with the navigation aspect of a
socially-compliant robot and provides a survey of existing solutions for the
relevant areas of research, as well as an outlook on possible future directions.
Comment: Robotics and Autonomous Systems, 202
Neural Network Based Reinforcement Learning for Audio-Visual Gaze Control in Human-Robot Interaction
This paper introduces a novel neural network-based reinforcement learning
approach for robot gaze control. Our approach enables a robot to learn and
adapt its gaze-control strategy for human-robot interaction without the use of
external sensors or human supervision. The robot learns to focus
its attention onto groups of people from its own audio-visual experiences,
independently of the number of people, of their positions and of their physical
appearances. In particular, we use a recurrent neural network architecture in
combination with Q-learning to find an optimal action-selection policy; we
pre-train the network using a simulated environment that mimics realistic
scenarios that involve speaking/silent participants, thus avoiding the need for
tedious sessions of a robot interacting with people. Our experimental
evaluation suggests that the proposed method is robust to parameter
estimation, i.e., the estimated parameter values do not have a decisive impact
on performance. The best results are obtained when both
audio and visual information is jointly used. Experiments with the Nao robot
indicate that our framework is a step forward towards the autonomous learning
of socially acceptable gaze behavior.
Comment: Paper submitted to Pattern Recognition Letters.
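The Q-learning core of the approach the abstract describes can be sketched with a minimal tabular example. The paper itself uses a recurrent neural network as the Q-function over audio-visual features; the toy states, gaze actions, and reward below (a "speakers on the left" world) are purely illustrative and not the paper's setup.

```python
# Minimal tabular Q-learning sketch of the action-selection learning the
# abstract builds on. The real system replaces this table with a recurrent
# neural network over audio-visual inputs; the toy environment here simply
# rewards gazing toward the side where speech is heard.

import random
from collections import defaultdict

ACTIONS = ["look_left", "look_center", "look_right"]
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1    # learning rate, discount, exploration

Q = defaultdict(float)               # Q[(state, action)] -> estimated value

def choose(state: str) -> str:
    """Epsilon-greedy action selection over the current Q estimates."""
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state: str, action: str, reward: float, next_state: str) -> None:
    """One Q-learning backup: Q += alpha * (r + gamma * max_a' Q' - Q)."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Toy environment: the speaking group is on the left, so looking left pays off.
random.seed(0)
for _ in range(200):
    state = "speech_on_left"
    action = choose(state)
    reward = 1.0 if action == "look_left" else 0.0
    update(state, action, reward, state)

print(max(ACTIONS, key=lambda a: Q[("speech_on_left", a)]))   # learned gaze policy
```

The same update rule carries over when the table is replaced by a network: the network predicts Q-values for each gaze action, and the temporal-difference target above becomes the regression target, which is why pre-training in a simulated environment (as the abstract describes) is enough to bootstrap a sensible policy.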