Describing Videos by Exploiting Temporal Structure
Recent progress in using recurrent neural networks (RNNs) for image
description has motivated the exploration of their application for video
description. However, while images are static, working with videos requires
modeling their dynamic temporal structure and then properly integrating that
information into a natural language description. In this context, we propose an
approach that successfully takes into account both the local and global
temporal structure of videos to produce descriptions. First, our approach
incorporates a spatio-temporal 3-D convolutional neural network (3-D CNN)
representation of the short temporal dynamics. The 3-D CNN representation is
trained on video action recognition tasks, so as to produce a representation
that is tuned to human motion and behavior. Second, we propose a temporal
attention mechanism that goes beyond local temporal modeling and learns
to automatically select the most relevant temporal segments given the
text-generating RNN. Our approach exceeds the current state-of-the-art for both
BLEU and METEOR metrics on the Youtube2Text dataset. We also present results on
a new, larger and more challenging dataset of paired video and natural language
descriptions.

Comment: Accepted to ICCV15. This version comes with code release and supplementary material.
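The temporal attention mechanism summarized above amounts to scoring each video segment against the current state of the text-generating RNN, normalizing the scores, and pooling the segment features. A minimal NumPy sketch; all names, shapes, and the particular scoring function are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def temporal_attention(feats, h, W, U, v):
    """Soft temporal attention: score each video segment against the
    current decoder state, normalize, and pool the segment features.
    feats: (T, d) per-segment features (e.g. 3-D CNN outputs),
    h: (k,) RNN hidden state, W: (m, k), U: (m, d), v: (m,) projections."""
    scores = np.tanh(h @ W.T + feats @ U.T) @ v  # (T,) relevance scores
    alpha = softmax(scores)                      # attention weights, sum to 1
    context = alpha @ feats                      # (d,) weighted segment average
    return context, alpha

rng = np.random.default_rng(0)
T, d, k, m = 5, 8, 6, 4
feats = rng.normal(size=(T, d))
h = rng.normal(size=k)
W, U, v = rng.normal(size=(m, k)), rng.normal(size=(m, d)), rng.normal(size=m)
context, alpha = temporal_attention(feats, h, W, U, v)
print(alpha)  # one weight per temporal segment
```

The weights form a distribution over segments, so the decoder can lean on different parts of the video at each generated word.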
Explainable and Advisable Learning for Self-driving Vehicles
Deep neural perception and control networks are likely to be a key component of self-driving vehicles. These models need to be explainable: they should provide easy-to-interpret rationales for their behavior so that passengers, insurance companies, law enforcement, developers, etc., can understand what triggered a particular behavior. Explanations may be triggered by the neural controller, namely introspective explanations, or informed by the neural controller's output, namely rationalizations. Our work has focused on the challenge of generating introspective explanations of deep models for self-driving vehicles.

In Chapter 3, we begin by exploring the use of visual explanations. These explanations take the form of real-time highlighted regions of an image that causally influence the network's output (steering control). In the first stage, we use a visual attention model to train a convolutional network end-to-end from images to steering angle. The attention model highlights image regions that potentially influence the network's output. Some of these are true influences, but some are spurious. We then apply a causal filtering step to determine which input regions actually influence the output. This produces more succinct visual explanations and more accurately exposes the network's behavior.

In Chapter 4, we add an attention-based video-to-text model to produce textual explanations of model actions, e.g. "the car slows down because the road is wet". The attention maps of the controller and the explanation model are aligned so that explanations are grounded in the parts of the scene that mattered to the controller. We explore two approaches to attention alignment: strong and weak alignment. These explainable systems represent an externalization of tacit knowledge. The network's opaque reasoning is simplified to a situation-specific dependence on a visible object in the image. This makes them brittle and potentially unsafe in situations that do not match training data.
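The causal filtering step described for Chapter 3 can be sketched as an occlusion test: mask each attended region out of the input and keep only regions whose removal measurably changes the controller's output. A toy sketch under stated assumptions; the region format, threshold, and the stand-in `predict` controller are all hypothetical, not the authors' method:

```python
import numpy as np

def causal_filter(image, attended_regions, predict, eps=0.1):
    """Keep only attended regions that causally influence the output:
    occlude each candidate region and check whether the prediction
    (e.g. steering angle) shifts by more than eps."""
    baseline = predict(image)
    causal = []
    for (r0, r1, c0, c1) in attended_regions:
        masked = image.copy()
        masked[r0:r1, c0:c1] = 0.0           # occlude the region
        if abs(predict(masked) - baseline) > eps:
            causal.append((r0, r1, c0, c1))  # true influence, not spurious
    return causal

# toy controller: "steering" is the mean intensity of the left half
predict = lambda img: img[:, :8].mean()
img = np.ones((16, 16))
regions = [(0, 4, 0, 4),    # left half: influences the output
           (0, 4, 12, 16)]  # right half: spurious attention
print(causal_filter(img, regions, predict))  # [(0, 4, 0, 4)]
```

Only the region the toy controller actually reads survives the filter, which is the sense in which the filtered attention map is "more succinct".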
In Chapter 5, we propose to address this issue by augmenting training data with natural language advice from a human. Advice includes guidance about what to do and where to attend. We present the first step toward advice-giving, where we train an end-to-end vehicle controller that accepts advice. The controller adapts both the way it attends to the scene (visual attention) and its control outputs (steering and speed). Further, in Chapter 6, we propose a new approach that learns vehicle control with the help of long-term (global) human advice. Specifically, our system learns to summarize its visual observations in natural language, predict an appropriate action response (e.g. "I see a pedestrian crossing, so I stop"), and predict the controls accordingly.
Introduction: The Third International Conference on Epigenetic Robotics
This paper summarizes the paper and poster contributions to the Third International Workshop on Epigenetic Robotics. The focus of this workshop is on the cross-disciplinary interaction of developmental psychology and robotics. Namely, the general goal in this area is to create robotic models of the psychological development of various behaviors. The term "epigenetic" is used in much the same sense as the term "developmental", and while we could call our topic "developmental robotics", developmental robotics can be seen as having a broader interdisciplinary emphasis. Our focus in this workshop is on the interaction of developmental psychology and robotics, and we use the phrase "epigenetic robotics" to capture this focus.
Human-Robot Cooperation Based on Semantic Environment Understanding
Ph.D. dissertation, Seoul National University, Department of Electrical and Computer Engineering, February 2020.

Human-robot cooperation is unavoidable in various applications ranging from manufacturing to field robotics, owing to the advantages of adaptability and high flexibility. In particular, complex task planning in large, unstructured, and uncertain environments can employ the complementary capabilities of humans and diverse robots. For a team to be effective, knowledge regarding team goals and the current situation needs to be shared effectively, as both affect decision making. In this respect, semantic scene understanding in natural language is one of the most fundamental components for information sharing between humans and heterogeneous robots, as robots can perceive the surrounding environment in a form that both humans and other robots can understand. Moreover, natural-language-based scene understanding can reduce network congestion and improve the reliability of acquired data. In field robotics especially, transmission of raw sensor data increases network bandwidth usage and decreases quality of service. We can resolve this problem by transmitting information in the form of natural language that encodes semantic representations of environments. In this dissertation, I introduce a human and heterogeneous robot cooperation scheme based on semantic scene understanding. I generate sentences and scene graphs (natural-language-grounded graphs over the detected objects and their relationships) from the graph map generated by a robot mapping algorithm. Subsequently, a framework that can utilize these results for cooperative mission planning between humans and robots is proposed. Experiments were performed to verify the effectiveness of the proposed methods.
This dissertation comprises two parts: graph-based scene understanding, and human and heterogeneous robot cooperation based on scene understanding. For the former, I introduce a novel natural language processing method using a semantic graph map. Although semantic graph maps have been widely applied to study the perceptual aspects of the environment, such maps do not find extensive application in natural language processing tasks. Several studies in computer vision have addressed the understanding of workspace images; in these studies, however, sentences were generated automatically from single images, so multiple scenes have not yet been utilized for sentence generation. A graph-based convolutional neural network, which comprises spectral graph convolution and graph coarsening layers, and a recurrent neural network are employed to generate sentences with attention over graphs. The proposed method outperforms the conventional methods on a publicly available dataset for single scenes and can also be utilized for sequential scenes.
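The spectral graph convolution mentioned above propagates node features over the scene graph's (normalized) adjacency structure before a learned projection. A minimal first-order sketch in NumPy; the toy scene graph, feature sizes, and weights are illustrative assumptions, not the dissertation's architecture:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d, 1.0) ** -0.5 * (d > 0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spectral_graph_conv(A, X, W):
    """One spectral graph-convolution layer (first-order approximation):
    diffuse node features X over the graph, then project with W.
    A: (n, n) adjacency, X: (n, f) node features, W: (f, g) weights."""
    L = normalized_laplacian(A)
    # I - L equals the normalized adjacency D^{-1/2} A D^{-1/2}
    return np.tanh((np.eye(len(A)) - L) @ X @ W)

# toy scene graph: 3 detected objects linked by two relations
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.eye(3)                     # one-hot node features
W = np.full((3, 2), 0.5)
H = spectral_graph_conv(A, X, W)  # (3, 2) node embeddings for the RNN decoder
print(H.shape)
```

In the described pipeline, embeddings like `H` (after coarsening) would feed an RNN decoder that attends over graph nodes while emitting words.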
Recently, deep learning has demonstrated impressive progress in scene understanding using natural language. However, it has not been extensively applied to high-level processes such as causal reasoning, analogical reasoning, or planning. The symbolic approach, which calculates a sequence of appropriate actions by combining the available skills of agents, excels at reasoning and planning; however, it does not fully consider semantic knowledge acquisition for human-robot information sharing. For the latter part, an architecture that combines deep learning techniques and a symbolic planner is proposed, enabling humans and heterogeneous robots to achieve a shared goal based on semantic scene understanding. In this study, graph-based perception is used for scene understanding. A Planning Domain Definition Language (PDDL) planner and JENA-TDB are utilized for mission planning and data storage, respectively. The effectiveness of the proposed method is verified in two situations: a mission failure, in which the dynamic environment changes, and object detection in a large and unseen environment.
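The symbolic planning side can be illustrated with a tiny STRIPS-style search over sets of ground facts, the representation PDDL planners operate on. This is a hypothetical stand-in sketch, not the dissertation's PDDL planner; the fetch-a-cup domain, action names, and facts are invented for illustration:

```python
def plan(state, goal, actions, max_depth=8):
    """Tiny STRIPS-style breadth-first planner (illustrative stand-in for
    a full PDDL planner): states and goals are sets of ground facts; each
    action is (name, preconditions, add_list, delete_list)."""
    frontier = [(frozenset(state), [])]
    seen = {frozenset(state)}
    while frontier:
        s, steps = frontier.pop(0)
        if goal <= s:
            return steps                      # all goal facts hold
        if len(steps) >= max_depth:
            continue
        for name, pre, add, dele in actions:
            if pre <= s:                      # action applicable here
                s2 = frozenset((s - dele) | add)
                if s2 not in seen:
                    seen.add(s2)
                    frontier.append((s2, steps + [name]))
    return None

# hypothetical domain: a robot fetches a cup reported in the scene graph
actions = [(n, frozenset(p), frozenset(a), frozenset(d)) for n, p, a, d in [
    ("goto_kitchen", {"at_base"}, {"at_kitchen"}, {"at_base"}),
    ("pick_cup", {"at_kitchen", "cup_in_kitchen"}, {"holding_cup"}, {"cup_in_kitchen"}),
    ("goto_base", {"at_kitchen"}, {"at_base"}, {"at_kitchen"}),
]]
steps = plan({"at_base", "cup_in_kitchen"}, {"holding_cup", "at_base"}, actions)
print(steps)  # ['goto_kitchen', 'pick_cup', 'goto_base']
```

In the described architecture, the facts themselves would come from natural-language scene understanding (via the knowledge store) rather than being hand-written, which is exactly the gap the combined neuro-symbolic framework addresses.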
1 Introduction
1.1 Background and Motivation
1.2 Literature Review
1.2.1 Natural Language-Based Human-Robot Cooperation
1.2.2 Artificial Intelligence Planning
1.3 The Problem Statement
1.4 Contributions
1.5 Dissertation Outline
2 Natural Language-Based Scene Graph Generation
2.1 Introduction
2.2 Related Work
2.3 Scene Graph Generation
2.3.1 Graph Construction
2.3.2 Graph Inference
2.4 Experiments
2.5 Summary
3 Language Description with 3D Semantic Graph
3.1 Introduction
3.2 Related Work
3.3 Natural Language Description
3.3.1 Preprocess
3.3.2 Graph Feature Extraction
3.3.3 Natural Language Description with Graph Features
3.4 Experiments
3.5 Summary
4 Natural Question with Semantic Graph
4.1 Introduction
4.2 Related Work
4.3 Natural Question Generation
4.3.1 Preprocess
4.3.2 Graph Feature Extraction
4.3.3 Natural Question with Graph Features
4.4 Experiments
4.5 Summary
5 PDDL Planning with Natural Language
5.1 Introduction
5.2 Related Work
5.3 PDDL Planning with Incomplete World Knowledge
5.3.1 Natural Language Process for PDDL Planning
5.3.2 PDDL Planning System
5.4 Experiments
5.5 Summary
6 PDDL Planning with Natural Language-Based Scene Understanding
6.1 Introduction
6.2 Related Work
6.3 A Framework for Heterogeneous Multi-Agent Cooperation
6.3.1 Natural Language-Based Cognition
6.3.2 Knowledge Engine
6.3.3 PDDL Planning Agent
6.4 Experiments
6.4.1 Experiment Setting
6.4.2 Scenario
6.4.3 Results
6.5 Summary
7 Conclusion
Using fuzzy logic to integrate neural networks and knowledge-based systems
Outlined here is a novel hybrid architecture that uses fuzzy logic to integrate neural networks and knowledge-based systems. The author's approach offers important synergistic benefits to neural nets, approximate reasoning, and symbolic processing. Fuzzy inference rules extend symbolic systems with approximate reasoning capabilities, which are used for integrating and interpreting the outputs of neural networks. The symbolic system captures meta-level information about neural networks and defines its interaction with them through a set of control tasks. Fuzzy action rules provide a robust mechanism for recognizing situations in which neural networks require certain control actions. The neural nets, on the other hand, offer flexible classification and adaptive learning capabilities, which are crucial for dynamic and noisy environments. By combining neural nets and symbolic systems at their system levels through the use of fuzzy logic, the author's approach alleviates current difficulties in reconciling differences between the low-level data-processing mechanisms of neural nets and artificial intelligence systems.
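The core mechanism, fuzzy rules that interpret neural-net outputs, can be sketched with min as fuzzy AND over rule antecedents and max merging competing conclusions. A minimal sketch; the membership functions, rule set, and network names are invented for illustration, not the author's system:

```python
def fuzzy_combine(confidences, rules):
    """Fuzzy inference over neural-net outputs: each rule fires to the
    degree its antecedents hold (min acts as fuzzy AND), and rules with
    the same conclusion are merged with max (fuzzy OR)."""
    conclusions = {}
    for antecedents, conclusion in rules:
        degree = min(mf(confidences[name]) for name, mf in antecedents)
        conclusions[conclusion] = max(conclusions.get(conclusion, 0.0), degree)
    return conclusions

# illustrative membership functions on a [0, 1] confidence scale
high = lambda x: max(0.0, min(1.0, 2.0 * x - 1.0))
low = lambda x: max(0.0, min(1.0, 1.0 - 2.0 * x))

# hypothetical rules interpreting the outputs of two networks
rules = [
    ([("net_a", high), ("net_b", high)], "accept classification"),
    ([("net_a", low)], "trigger retraining"),
]
result = fuzzy_combine({"net_a": 0.9, "net_b": 0.6}, rules)
print(result)
```

A crisp rule engine would have to pick a hard threshold; here the weaker network (`net_b` at 0.6) degrades the "accept" conclusion gracefully instead of flipping it, which is the robustness the abstract attributes to fuzzy action rules.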