Bias in Emotion Recognition with ChatGPT
This technical report explores ChatGPT's ability to recognize emotions from
text, a capability that can underpin applications such as interactive
chatbots, data annotation, and mental health analysis. While prior research
has shown ChatGPT's basic competence in sentiment analysis, its performance
on more nuanced emotion recognition has not yet been explored. Here, we
conducted experiments to evaluate its emotion recognition performance across
different datasets and emotion labels. Our findings indicate a reasonable
level of reproducibility in its performance, with noticeable improvement
through fine-tuning. However, performance varies across emotion labels and
datasets, highlighting an inherent instability and possible bias. The choice
of dataset and emotion labels significantly impacts ChatGPT's emotion
recognition performance. This paper sheds light on the importance of dataset
and label selection, and on the potential of fine-tuning to enhance ChatGPT's
emotion recognition capabilities, providing groundwork for better integration
of emotion analysis in ChatGPT-based applications.
Comment: 5 pages, 4 figures, 6 tables
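The evaluation protocol described above is straightforward to approximate. Below is a minimal sketch of single-label emotion classification with a chat model, assuming the openai Python package's chat-completions interface; the label set, prompt wording, and model name are illustrative stand-ins, not the report's exact setup.

```python
from openai import OpenAI

# Assumed label set; the report evaluates datasets with differing label sets.
EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "neutral"]

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_emotion(text: str) -> str:
    """Ask the model for exactly one label from the assumed emotion set."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for "ChatGPT" in the report
        temperature=0,          # reduce run-to-run variance when probing reproducibility
        messages=[
            {"role": "system",
             "content": "Classify the emotion of the user's text. "
                        f"Answer with exactly one of: {', '.join(EMOTIONS)}."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

print(classify_emotion("I finally passed the exam!"))  # expected: "joy"
```

Reproducibility and per-label bias can then be probed by repeating this call over a labeled dataset and comparing per-label accuracy across runs and datasets.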
Interactive Task Encoding System for Learning-from-Observation
We introduce a practical pipeline that interactively encodes multimodal human
demonstrations for robot teaching. The pipeline is designed as an input system
for a framework called Learning-from-Observation (LfO), which aims to program
household robots with manipulative tasks through few-shot human demonstrations,
without coding. While most previous LfO systems rely on visual demonstration
alone, recent research on robot teaching has shown the effectiveness of verbal
instruction in making recognition robust and teaching interactive. To the best
of our knowledge, however, no LfO system has yet been proposed that utilizes
both verbal instruction and interaction, namely multimodal LfO. This paper
proposes the interactive task encoding system (ITES) as an input pipeline for
multimodal LfO. ITES assumes that the user teaches step by step, pausing hand
movements to match the granularity of human instructions with the granularity
of robot execution. ITES recognizes tasks from the step-by-step verbal
instructions that accompany the hand movements, and the recognition is made
robust through interactions with the user. We test ITES on a real robot and
show that users can successfully teach multiple operations through multimodal
demonstrations. The results suggest the usefulness of ITES for multimodal LfO.
The source code is available at
https://github.com/microsoft/symbolic-robot-teaching-interface.
Comment: 7 pages, 10 figures. Last updated January 24th, 202
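As a rough illustration of the pause-based teaching assumption, the sketch below segments a hand-motion stream at sustained low-speed intervals and pairs each segment with the verbal instruction uttered during it. The thresholds, data layout, and helper names are assumptions for illustration, not the ITES implementation.

```python
from dataclasses import dataclass

@dataclass
class Step:
    instruction: str  # transcribed verbal instruction for this step
    start: int        # first frame of the hand-motion segment
    end: int          # last frame of the segment (inclusive)

def segment_by_pauses(speeds, utterances, pause_thresh=0.02, min_pause=5):
    """speeds: per-frame hand speed; utterances: (frame, text) pairs."""
    steps, seg_start, still = [], 0, 0
    for i, s in enumerate(speeds):
        still = still + 1 if s < pause_thresh else 0
        if still == min_pause:  # a sustained pause closes the current segment
            seg_end = i - min_pause
            words = [t for f, t in utterances if seg_start <= f <= i]
            if words:
                steps.append(Step(" ".join(words), seg_start, seg_end))
            seg_start = i + 1
    return steps

# Two motions separated by pauses, each accompanied by one instruction.
speeds = [0.1] * 10 + [0.0] * 6 + [0.12] * 8 + [0.0] * 6
utterances = [(3, "grasp the cup"), (18, "move it to the sink")]
for step in segment_by_pauses(speeds, utterances):
    print(step)
```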
GPT Models Meet Robotic Applications: Co-Speech Gesturing Chat System
This technical paper introduces a chatting robot system that utilizes recent
advancements in large-scale language models (LLMs) such as GPT-3 and ChatGPT.
The system is integrated with a co-speech gesture generation module, which
selects appropriate gestures based on the conceptual meaning of the speech.
Our motivation is to explore ways of utilizing recent progress in LLMs for
practical robotic applications, which benefits the development of both
chatbots and LLMs: it enables highly responsive chatbot systems by leveraging
LLMs, and it adds visual effects to the user interface of LLMs as added value.
The source code for the system is available on GitHub for our in-house robot
(https://github.com/microsoft/LabanotationSuite/tree/master/MSRAbotChatSimulation)
and on GitHub for Toyota HSR
(https://github.com/microsoft/GPT-Enabled-HSR-CoSpeechGestures)
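The concept-to-gesture selection can be pictured as a small mapping layer between the LLM's reply and the robot's gesture library. The sketch below is a minimal stand-in; the concept categories and gesture names are illustrative assumptions, not the repositories' actual gesture sets.

```python
# Assumed concept categories mapped to canned gesture clips.
CONCEPT_GESTURES = {
    "greeting":   "wave_right_hand",
    "agreement":  "nod",
    "negation":   "shake_head",
    "explaining": "open_palms",
}

def select_gesture(concept: str) -> str:
    # Fall back to a neutral idle motion for unknown concepts.
    return CONCEPT_GESTURES.get(concept, "idle")

def speak_with_gesture(text: str, concept: str) -> None:
    gesture = select_gesture(concept)
    # In the real system the LLM reply is spoken via TTS while the robot
    # plays the selected gesture; here we just print the pairing.
    print(f"[gesture: {gesture}] {text}")

speak_with_gesture("Hello! Nice to meet you.", "greeting")
```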
GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration
We introduce a pipeline that enhances a general-purpose vision language model,
GPT-4V(ision), by integrating observations of human actions to facilitate
robotic manipulation. The system analyzes videos of humans performing tasks
and creates executable robot programs that incorporate affordance insights.
The computation starts by analyzing the videos with GPT-4V to convert
environmental and action details into text, followed by a GPT-4-empowered task
planner. In the subsequent analyses, vision systems reanalyze the video
against the task plan: object names are grounded using an open-vocabulary
object detector, while attention to the hand-object relation helps detect the
moments of grasping and releasing. This spatiotemporal grounding allows the
vision systems to gather further affordance data (e.g., grasp type, waypoints,
and body postures). Experiments across various scenarios demonstrate the
method's efficacy in enabling real robots to perform operations from human
demonstrations in a zero-shot manner. The prompts of GPT-4V/GPT-4 are
available at this project page:
https://microsoft.github.io/GPT4Vision-Robot-Manipulation-Prompts/
Comment: 8 pages, 10 figures, 1 table. Last updated on November 20th, 202
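The grasp/release detection step can be approximated very simply: once the hand-object distance is available per frame, contact onset and offset delimit the grasp. The sketch below assumes such a distance signal and an arbitrary contact threshold; it is an illustration of the idea, not the paper's vision stack.

```python
def find_grasp_release(hand_obj_dist, contact_thresh=0.03):
    """hand_obj_dist: per-frame hand-object distance (metres, assumed given).

    Returns (grasp_frame, release_frame): the first frame where the hand
    reaches the object and the first later frame where it leaves it.
    """
    grasp = release = None
    for i, d in enumerate(hand_obj_dist):
        if grasp is None and d < contact_thresh:
            grasp = i                    # hand makes contact with the object
        elif grasp is not None and d >= contact_thresh:
            release = i                  # hand breaks contact
            break
    return grasp, release

dist = [0.20, 0.10, 0.02, 0.01, 0.01, 0.08, 0.25]
print(find_grasp_release(dist))  # (2, 5)
```

Anchoring the grasp and release frames this way is what lets the pipeline attach affordance data (grasp type, waypoints, postures) to the right moments in the plan.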
ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application
This paper demonstrates how OpenAI's ChatGPT can be used in a few-shot setting
to convert natural language instructions into an executable robot action
sequence. The paper proposes easy-to-customize input prompts for ChatGPT that
meet common requirements in practical applications, such as easy integration
with robot execution systems and applicability to various environments, while
minimizing the impact of ChatGPT's token limit. The prompts encourage ChatGPT
to output a sequence of predefined robot actions, to represent the operating
environment in a formalized style, and to infer the updated state of the
operating environment. Experiments confirmed that the proposed prompts enable
ChatGPT to act according to the requirements in various environments, and that
users can adjust ChatGPT's output with natural language feedback for safe and
robust operation. The proposed prompts and source code are open source and
publicly available at
https://github.com/microsoft/ChatGPT-Robot-Manipulation-Prompts
Comment: 17 figures. Last updated April 11th, 202
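The prompt style described above (a fixed action vocabulary, a formalized environment, and an updated-state field in the answer) can be sketched as a simple template. The action names, schema, and wording below are illustrative assumptions; the actual prompts are in the linked repository.

```python
# A minimal, assumed prompt template in the style described in the abstract.
PROMPT_TEMPLATE = """You are a robot action planner.
Allowed actions: move_hand(), grasp_object(), release_object(), open(), close().

Environment (formalized state):
{environment}

Instruction: {instruction}

Respond in JSON with two keys:
  "task_sequence": a list of allowed actions in execution order,
  "updated_environment": the environment state after execution."""

def build_prompt(environment: str, instruction: str) -> str:
    return PROMPT_TEMPLATE.format(environment=environment,
                                  instruction=instruction)

print(build_prompt('{"objects": ["cup"], "cup": "on_table"}',
                   "Put the cup in the cupboard."))
```

Constraining the output to predefined actions keeps the response machine-parsable, and carrying the updated environment forward lets multi-step dialogues stay within the token limit.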
Development and Practice of a Junior High School History Unit to Promote the Reconstruction of “Historical Significance”: The case of the unit “What criteria should be used to decide the boldface in textbooks?”
This study proposes a history unit plan aimed at helping junior high school students reconstruct their conception of historical significance. Items judged historically significant are often printed in boldface in the textbooks used by junior high school students; as citizens, however, students should not judge historical significance simply by whether an item is emphasized in a textbook. Therefore, this study used unit plans from The Historical Thinking Project (Canada) and The Critical Thinking Consortium as the basis for developing and teaching a history unit. The unit aims to reconstruct “historical significance” by 1) understanding the diversity of criteria for “historical significance,” 2) progressively deepening students' understanding of “historical significance,” and 3) reconstructing “historical significance” from a broader perspective. In particular, the boldface type in textbooks was used as material for the performance task “Let's submit an opinion about the boldface type to the textbook writer,” designed to prompt this reconstruction. The outcomes of this research hint at future possibilities for larger reforms of the history curriculum.
Constraint-aware Policy for Compliant Manipulation
Robot manipulation in a physically constrained environment requires compliant
manipulation: the skill of adjusting hand motion in response to the force
imposed by the environment. Recently, reinforcement learning (RL) has been
applied to household operations that involve compliant manipulation. However,
previous RL methods have primarily designed a policy for a single specific
operation, which limits their applicability and requires separate training for
every new operation. We propose a constraint-aware policy that is applicable
to various unseen manipulations, obtained by grouping several manipulations
together based on the type of physical constraint involved. The type of
physical constraint determines the characteristic direction of the imposed
force; thus, a generalized policy is trained with an environment and a reward
designed on the basis of this characteristic. This paper focuses on two types
of physical constraints: prismatic and revolute joints. Experiments
demonstrated that the same policy could successfully execute various
compliant-manipulation operations, both in simulation and in reality. We
believe this study is the first step toward realizing a generalized household
robot.
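The constraint-type idea can be illustrated with a toy reward: for a prismatic joint the permitted motion direction is a fixed axis (e.g., a drawer slide), while for a revolute joint it is the tangent around the hinge (e.g., a door), and the reward favours hand motion aligned with that direction. The scaling, inputs, and field names below are illustrative assumptions, not the paper's training setup.

```python
import numpy as np

def permitted_direction(constraint, hand_pos):
    """Direction of motion the physical constraint allows at the hand."""
    if constraint["type"] == "prismatic":
        return constraint["axis"]                    # fixed slide axis
    if constraint["type"] == "revolute":
        radial = hand_pos - constraint["hinge_pos"]  # from hinge to hand
        tangent = np.cross(constraint["hinge_axis"], radial)
        return tangent / (np.linalg.norm(tangent) + 1e-9)
    raise ValueError("unknown constraint type")

def alignment_reward(constraint, hand_pos, hand_vel):
    """+1 when the hand moves along the permitted direction, -1 against it."""
    d = permitted_direction(constraint, hand_pos)
    v = hand_vel / (np.linalg.norm(hand_vel) + 1e-9)
    return float(np.dot(v, d))

drawer = {"type": "prismatic", "axis": np.array([1.0, 0.0, 0.0])}
print(alignment_reward(drawer, np.zeros(3), np.array([0.5, 0.1, 0.0])))
```

Because the reward depends only on the constraint type rather than on any particular object, one policy trained this way can in principle transfer across operations that share the same constraint.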