Real-time generation and adaptation of social companion robot behaviors
Social robots will be part of our future homes.
They will assist us in everyday tasks, entertain us, and provide helpful advice.
However, the technology still faces challenges that must be overcome to equip the machine with social competencies and make it a socially intelligent and accepted housemate.
An essential skill of every social robot is verbal and non-verbal communication.
In contrast to voice assistants, smartphones, and smart home technology, which are already part of many people's lives today, social robots have an embodiment that raises expectations towards the machine.
Their anthropomorphic or zoomorphic appearance suggests they can communicate naturally with speech, gestures, or facial expressions and understand corresponding human behaviors.
In addition, robots also need to consider individual users' preferences: everybody is shaped by their culture, social norms, and life experiences, resulting in different expectations towards communication with a robot.
However, robots do not have human intuition - they must be equipped with the corresponding algorithmic solutions to these problems.
This thesis investigates the use of reinforcement learning to adapt the robot's verbal and non-verbal communication to the user's needs and preferences.
Such non-functional adaptation of the robot's behaviors primarily aims to improve the user experience and the robot's perceived social intelligence.
The literature has not yet provided a holistic view of the overall challenge: real-time adaptation requires control over the robot's multimodal behavior generation, an understanding of human feedback, and an algorithmic basis for machine learning.
Thus, this thesis develops a conceptual framework for designing real-time non-functional social robot behavior adaptation with reinforcement learning.
It provides a higher-level view from the system designer's perspective and guidance from the start to the end.
It illustrates the process of modeling, simulating, and evaluating such adaptation processes.
Specifically, it guides the integration of human feedback and social signals to equip the machine with social awareness.
The conceptual framework is put into practice for several use cases, resulting in technical proofs of concept and research prototypes.
They are evaluated in the lab and in in-situ studies.
These approaches address typical activities in domestic environments, focusing on the robot's expression of personality, persona, politeness, and humor.
Within this scope, the robot adapts its spoken utterances, prosody, and animations based on explicit or implicit human feedback.
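The feedback-driven adaptation described above can be sketched as a multi-armed bandit over behavior styles. This is a minimal, illustrative sketch only: the style names, the scalar feedback signal, and the simulated user preference are assumptions, not the thesis's actual method.

```python
import random

# Hypothetical sketch: epsilon-greedy bandit choosing an utterance style
# for a robot, updated from scalar user feedback (+1 / -1). The styles
# and the feedback scale are illustrative assumptions.
STYLES = ["direct", "polite", "humorous"]

class StyleBandit:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {s: 0 for s in STYLES}
        self.values = {s: 0.0 for s in STYLES}  # running mean reward per style

    def choose(self):
        if random.random() < self.epsilon:
            return random.choice(STYLES)          # explore
        return max(STYLES, key=self.values.get)   # exploit current estimate

    def update(self, style, reward):
        # Incremental running-mean update of the chosen style's value.
        self.counts[style] += 1
        self.values[style] += (reward - self.values[style]) / self.counts[style]

bandit = StyleBandit()
random.seed(0)
# Simulated user who consistently prefers polite utterances.
for _ in range(500):
    s = bandit.choose()
    reward = 1.0 if s == "polite" else -1.0
    bandit.update(s, reward)

best = max(STYLES, key=bandit.values.get)  # converges to "polite"
```

In a real system the reward would come from the explicit or implicit human feedback signals the thesis discusses, rather than a hard-coded preference.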
Analytics and Intuition in the Process of Selecting Talent
In management, decisions are expected to rest on rational analytics rather than intuition. Yet intuition, as a human evolutionary achievement, offers a kind of wisdom that, despite all advances in rational analytics and AI, should be used constructively when recruiting and attracting talent. Integrating these inner experiential competencies with rational-analytical procedures leads to smart recruiting decisions.
Towards Neural Numeric-To-Text Generation From Temporal Personal Health Data
With an increased interest in the production of personal health technologies
designed to track user data (e.g., nutrient intake, step counts), there is now
more opportunity than ever to surface meaningful behavioral insights to
everyday users in the form of natural language. This knowledge can increase
their behavioral awareness and allow them to take action to meet their health
goals. It can also bridge the gap between the vast collection of personal
health data and the summary generation required to describe an individual's
behavioral tendencies. Previous work has focused on rule-based time-series data
summarization methods designed to generate natural language summaries of
interesting patterns found within temporal personal health data. We examine
recurrent, convolutional, and Transformer-based encoder-decoder models to
automatically generate natural language summaries from numeric temporal
personal health data. We showcase the effectiveness of our models on real user
health data logged in MyFitnessPal and show that we can automatically generate
high-quality natural language summaries. Our work serves as a first step
towards the ambitious goal of automatically generating novel and meaningful
temporal summaries from personal health data.

Comment: 5 pages, 2 figures, 1 table
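The rule-based baseline the abstract contrasts with can be illustrated in a few lines. The thresholds, phrasing, and nutrient values below are invented for illustration and do not reproduce the paper's actual summarization rules.

```python
# Hypothetical sketch of rule-based time-series summarization: map a week
# of numeric nutrient logs to a one-sentence natural-language summary.
# Thresholds and wording are illustrative assumptions.

def summarize_week(name, values, goal):
    """Produce a natural-language summary for seven daily numeric logs."""
    days_met = sum(1 for v in values if v <= goal)
    avg = sum(values) / len(values)
    if days_met >= 6:
        trend = "consistently stayed within"
    elif days_met >= 3:
        trend = "sometimes exceeded"
    else:
        trend = "regularly exceeded"
    return (f"Last week you {trend} your {name} goal: "
            f"{days_met}/7 days within the limit of {goal}, averaging {avg:.0f}.")

# Example week of sodium logs (mg), with a hypothetical 2300 mg goal.
sodium = [2300, 2500, 2100, 2250, 2400, 2200, 2150]
summary = summarize_week("sodium intake (mg)", sodium, goal=2300)
```

The neural encoder-decoder models the paper examines aim to learn this numeric-to-text mapping directly from data instead of hand-written rules like these.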
Virginia Commonwealth University Graduate Bulletin
Graduate bulletin for Virginia Commonwealth University for the academic year 2022-2023. It includes information on academic regulations, degree requirements, course offerings, faculty, academic calendar, and tuition and expenses for graduate programs.
Building an Understanding of Human Activities in First Person Video using Fuzzy Inference
Activities of Daily Living (ADLs) are the activities that people perform every day in their home as part of their typical routine. The in-home, automated monitoring of ADLs has broad utility for intelligent systems that enable independent living for the elderly and for mentally or physically disabled individuals. With rising interest in electronic health (e-Health) and mobile health (m-Health) technology, opportunities abound for integrating activity monitoring systems into these newer forms of healthcare. In this dissertation we propose a novel system for describing ADLs based on video collected from a wearable camera. Most in-home activities are naturally defined by interaction with objects. We leverage these object-centric activity definitions to develop a set of rules for a Fuzzy Inference System (FIS) that uses video features and the identification of objects to detect and classify activities. Further, we demonstrate that the FIS enhances the reliability of the system and, owing to the linguistic nature of fuzzy systems, offers better explainability and interpretability of results than popular machine-learning classifiers.
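The object-centric fuzzy inference idea can be sketched compactly. The membership functions, objects, and rules below are invented for illustration; the dissertation's actual rule base is not reproduced here.

```python
# Illustrative sketch of object-centric fuzzy inference for ADL recognition.
# Objects, membership parameters, and rules are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(kettle_conf, toothbrush_conf, motion_level):
    # Fuzzify inputs: object-detection confidences and motion level, all in [0, 1].
    high_kettle = tri(kettle_conf, 0.4, 1.0, 1.6)
    high_brush = tri(toothbrush_conf, 0.4, 1.0, 1.6)
    busy = tri(motion_level, 0.3, 0.8, 1.3)
    # Mamdani-style rules: firing strength = min of the antecedent memberships.
    rules = {
        "making tea": min(high_kettle, busy),
        "brushing teeth": min(high_brush, busy),
    }
    return max(rules, key=rules.get), rules

activity, strengths = classify(kettle_conf=0.9, toothbrush_conf=0.1, motion_level=0.7)
```

The linguistic rule form ("IF kettle is detected AND motion is busy THEN activity is making tea") is what gives fuzzy systems the interpretability the abstract highlights.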
Human-Robot Collaborations in Industrial Automation
Technology is changing the manufacturing world. For example, sensors are being used to track inventories from the manufacturing floor up to a retail shelf or a customer’s door. These types of interconnected systems have been called the fourth industrial revolution, also known as Industry 4.0, and are projected to lower manufacturing costs. As industry moves toward these integrated technologies and lower costs, engineers will need to connect these systems via the Internet of Things (IoT). These engineers will also need to design how these connected systems interact with humans. The focus of this Special Issue is the smart sensors used in these human–robot collaborations.
Detecting head movement using gyroscope data collected via in-ear wearables
Abstract. Head movement is an effective, natural, and simple way to determine where a person is pointing. Head movement detection has significant potential in diverse fields of application, and studies in this area support that claim. Applications include user interaction with computers, external control of devices, power wheelchair operation, detecting driver drowsiness, video surveillance systems, and many more. Given this diversity of applications, the methods for detecting head movement are also wide-ranging. Over the years, researchers have introduced a number of approaches, such as acoustic-based, video-based, computer-vision-based, and inertial-sensor-based head movement detection methods. Various types of wearables are available to generate inertial sensor data, for example wrist bands, smart watches, and head-mounted devices.
For this thesis, eSense, a representative earable device with a built-in inertial sensor that produces gyroscope data, is employed. The eSense device is a True Wireless Stereo (TWS) earbud augmented with key components such as a 6-axis inertial motion unit, a microphone, and dual-mode Bluetooth (Bluetooth Classic and Bluetooth Low Energy). Features are extracted from the gyroscope data collected via the eSense device. Subsequently, four machine learning models are applied to detect head movement: Random Forest (RF), Support Vector Machine (SVM), Naïve Bayes, and Perceptron. The performance of these models is evaluated with four metrics: Accuracy, Precision, Recall, and F1 score. The results show that the applied machine learning models are able to detect head movement. Comparing their performance, Random Forest performs best, detecting head movement with approximately 77% accuracy. The accuracy of the other three models is close together: Support Vector Machine, Naïve Bayes, and Perceptron detect head movement with about 42%, 40%, and 39% accuracy, respectively. The results for Precision, Recall, and F1 score further confirm that these models can distinguish head directions such as left, right, or straight.
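The feature-extraction step described above can be sketched as windowed statistics over one gyroscope axis. A simple threshold rule stands in for the trained Random Forest here; the axis convention, sign of rotation, and thresholds are all assumptions, not the thesis's pipeline.

```python
import statistics

# Sketch: summary-statistic features over a window of yaw angular velocity
# (deg/s), plus a hypothetical threshold rule in place of the trained
# Random Forest classifier.

def extract_features(window):
    """Summary statistics for one window of yaw-axis gyroscope readings."""
    return {
        "mean": statistics.mean(window),
        "std": statistics.pstdev(window),
        "min": min(window),
        "max": max(window),
    }

def classify_head_movement(window, thresh=15.0):
    """Assumed convention: sustained positive yaw rate = left turn."""
    f = extract_features(window)
    if f["mean"] > thresh:
        return "left"
    if f["mean"] < -thresh:
        return "right"
    return "straight"

left_turn = [20.0, 35.0, 40.0, 25.0, 10.0]  # sustained positive yaw rate
still = [1.0, -2.0, 0.5, -1.0, 0.0]         # near-zero rotation
```

In the thesis, features like these would be fed to the RF/SVM/Naïve Bayes/Perceptron models rather than a fixed threshold.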
Virginia Commonwealth University Graduate Bulletin
Graduate bulletin for Virginia Commonwealth University for the academic year 2021-2022. It includes information on academic regulations, degree requirements, course offerings, faculty, academic calendar, and tuition and expenses for graduate programs.
Reinforcement Learning Approaches in Social Robotics
This article surveys reinforcement learning approaches in social robotics.
Reinforcement learning is a framework for decision-making problems in which an
agent interacts through trial-and-error with its environment to discover an
optimal behavior. Since interaction is a key component in both reinforcement
learning and social robotics, it can be a well-suited approach for real-world
interactions with physically embodied social robots. The scope of the paper is
focused particularly on studies that include social physical robots and
real-world human-robot interactions with users. We present a thorough analysis
of reinforcement learning approaches in social robotics. In addition to the
survey, we categorize existing reinforcement learning approaches based on the
method used and the design of the reward mechanisms. Moreover, since
communication capability is a prominent feature of social robots, we discuss
and group the papers based on the communication medium used for reward
formulation. Considering the importance of designing the reward function, we
also provide a categorization of the papers based on the nature of the reward.
This categorization includes three major themes: interactive reinforcement
learning, intrinsically motivated methods, and task performance-driven methods.
The paper also covers the benefits and challenges of reinforcement learning in
social robotics, evaluation methods of the surveyed papers with regard to their
use of subjective and algorithmic measures, a discussion in view of real-world
reinforcement learning challenges and proposed solutions, and the points that
remain to be explored, including approaches that have so far received less
attention. Thus, this paper aims to serve as a starting point for researchers
interested in applying reinforcement learning methods in this particular
research field.
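The interactive reinforcement learning theme the survey identifies, where the reward signal is supplied by a human, can be sketched with tabular one-step Q-learning. The states, actions, and simulated feedback below are illustrative assumptions, not a method from any surveyed paper.

```python
import random

# Minimal sketch of interactive reinforcement learning: tabular Q-learning
# in which the reward comes from a human (simulated here). States, actions,
# and the feedback function are hypothetical.
STATES = ["greeting", "conversation"]
ACTIONS = ["wave", "speak", "stay_silent"]

def human_feedback(state, action):
    """Simulated user: likes waving when greeted, speech in conversation."""
    if state == "greeting" and action == "wave":
        return 1.0
    if state == "conversation" and action == "speak":
        return 1.0
    return -0.5

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.2, 0.1
random.seed(1)

for _ in range(2000):
    s = random.choice(STATES)
    if random.random() < epsilon:
        a = random.choice(ACTIONS)                      # explore
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])   # exploit
    r = human_feedback(s, a)
    # One-step (bandit-style) update: no successor state is modeled.
    Q[(s, a)] += alpha * (r - Q[(s, a)])

best_greeting = max(ACTIONS, key=lambda a: Q[("greeting", a)])
best_conversation = max(ACTIONS, key=lambda a: Q[("conversation", a)])
```

Replacing the simulated feedback with real explicit or implicit human signals is precisely where the reward-design challenges the survey categorizes arise.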