224 research outputs found
Semantic-Aware Scene Recognition
Scene recognition is currently one of the most challenging research fields in
computer vision. This may be due to the ambiguity between classes: images of
several scene classes may share similar objects, which causes confusion among
them. The problem is aggravated when images of a particular scene class are
notably different from each other. Convolutional Neural Networks (CNNs) have
significantly boosted performance in scene recognition, although it still lags
far behind other recognition tasks (e.g., object or image recognition). In this
paper, we
describe a novel approach for scene recognition based on an end-to-end
multi-modal CNN that combines image and context information by means of an
attention module. Context information, in the form of semantic segmentation,
is used to gate features extracted from the RGB image by leveraging
information encoded in the semantic representation: the set of scene objects
and stuff, and their relative locations. This gating process reinforces the
learning of indicative scene content and enhances scene disambiguation by
refocusing the receptive fields of the CNN towards them. Experimental results
on four publicly available datasets show that the proposed approach outperforms
every other state-of-the-art method while significantly reducing the number of
network parameters. All the code and data used in this paper are available at
https://github.com/vpulab/Semantic-Aware-Scene-Recognition
Comment: Paper submitted for publication to the Elsevier Pattern Recognition journal.
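The semantic gating described in this abstract can be sketched in a few lines. The feature shapes and the single channel-mixing matrix below are illustrative assumptions, not the paper's actual attention module:

```python
import numpy as np

def semantic_gate(rgb_feat, sem_feat, w):
    """Gate RGB features with attention derived from semantic features.

    Minimal sketch: `w` is a hypothetical channel-mixing matrix; the
    paper's attention module is a learned network, not a single matmul.
    rgb_feat, sem_feat: (C, H, W) feature maps; w: (C, C).
    """
    # Channel-mix the semantic features, then squash to (0, 1).
    attn = 1.0 / (1.0 + np.exp(-np.einsum("dc,chw->dhw", w, sem_feat)))
    # Elementwise gating: attention re-weights the RGB features.
    return rgb_feat * attn

rgb = np.random.rand(8, 4, 4)   # RGB-branch features
sem = np.random.rand(8, 4, 4)   # semantic-segmentation-branch features
gated = semantic_gate(rgb, sem, np.random.rand(8, 8) * 0.1)
```

Because the gate stays in (0, 1), the semantic branch can only attenuate or pass RGB features, which matches the "refocusing" role described above.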
Deep learning in remote sensing: a review
Standing at the paradigm shift towards data-intensive science, machine
learning techniques are becoming increasingly important. In particular, as a
major breakthrough in the field, deep learning has proven to be an extremely
powerful tool in many fields. Should we embrace deep learning as the key to
everything, or should we resist it as a 'black-box' solution? There are controversial opinions
in the remote sensing community. In this article, we analyze the challenges of
using deep learning for remote sensing data analysis, review the recent
advances, and provide resources to make deep learning in remote sensing
ridiculously simple to start with. More importantly, we advocate that remote
sensing scientists bring their expertise into deep learning and use it as an
implicit general model to tackle unprecedented, large-scale, influential
challenges such as climate change and urbanization.
Comment: Accepted for publication in the IEEE Geoscience and Remote Sensing Magazine.
Structured Understanding of Visual Data Using Partial Information: Sparsity, Randomness, Relevance, and Deep Networks
Thesis (Ph.D.) -- Seoul National University Graduate School: Dept. of Electrical and Computer Engineering, College of Engineering, 2019. 2. Advisor: Oh, Songhwai.
For a deeper understanding of visual data, the relationship between local parts and the global scene has to be carefully examined. Examples of such relationships in vision problems include, but are not limited to, detecting a region of interest in the scene, classifying an image based on limited visual cues, and synthesizing new images conditioned on local or global inputs. In this thesis, we aim to learn this relationship and demonstrate its importance by showing that it is one of the critical keys to addressing four challenging vision problems. For each problem, we construct deep neural networks suited to the task.
The first problem considered in the thesis is object detection. It requires not only finding local patches that look like target objects conditioned on the context of the input scene, but also comparing local patches themselves to assign a single detection to each object. To this end, we introduce the individualness of detection candidates as a complement to objectness for object detection. The individualness assigns a single detection to each object out of the raw detection candidates given by either object proposals or sliding windows. We show that conventional approaches, such as non-maximum suppression, are sub-optimal since they suppress nearby detections using only detection scores. Instead, we use a determinantal point process combined with the individualness to optimally select the final detections. It models each detection using its quality and its similarity to other detections based on the individualness. Detections with high detection scores and low correlations are then selected by measuring their probability via the determinant of a matrix, which is composed of quality terms on the diagonal entries and similarity terms on the off-diagonal entries. For concreteness, we focus on pedestrian detection, as it is one of the most challenging detection problems due to frequent occlusions and unpredictable human motions. Experimental results demonstrate that the proposed algorithm performs favorably against existing methods, including non-maximum suppression and a quadratic unconstrained binary optimization based method.
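The selection rule above (quality on the diagonal, similarity off the diagonal, probability measured by a determinant) can be illustrated with a tiny greedy sketch. The kernel construction and the toy numbers are assumptions for illustration, not the thesis's exact formulation:

```python
import numpy as np

def greedy_dpp_select(quality, similarity, k):
    """Greedily pick k detections maximizing the DPP submatrix determinant.

    Sketch only: quality scores sit on the kernel diagonal, pairwise
    similarities on the off-diagonal, so a subset's determinant is large
    when its members score high and overlap little.
    """
    n = len(quality)
    L = similarity.copy()
    np.fill_diagonal(L, quality)  # kernel: quality diag, similarity off-diag
    selected = []
    for _ in range(k):
        best, best_det = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            idx = selected + [i]
            d = np.linalg.det(L[np.ix_(idx, idx)])
            if d > best_det:
                best, best_det = i, d
        selected.append(best)
    return selected

quality = np.array([0.9, 0.85, 0.3])
# Detections 0 and 1 overlap heavily (high similarity); 2 is distinct.
similarity = np.array([[0.0, 0.8, 0.05],
                       [0.8, 0.0, 0.05],
                       [0.05, 0.05, 0.0]])
picks = greedy_dpp_select(quality, similarity, 2)  # → [0, 2]
```

Note how the determinant penalizes picking both overlapping detections even though detection 1 has a higher raw score than detection 2, which is exactly the behavior score-only non-maximum suppression cannot guarantee.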
For the second problem, we classify images based on observations of local patches. More specifically, we consider the problem of estimating the head pose and body orientation of a person from a low-resolution image. In this setting, it is difficult to reliably extract facial features or detect body parts. We propose a convolutional random projection forest (CRPforest) algorithm for these tasks. A convolutional random projection network (CRPnet) is used at each node of the forest. It maps an input image to a high-dimensional feature space using a rich filter bank. The filter bank is designed to generate sparse responses so that they can be efficiently computed via compressive sensing: a sparse random projection matrix can capture most of the essential information contained in the filter bank without using all of its filters. The CRPnet is therefore fast, requiring only 0.04 ms to process a 50×50-pixel image thanks to its small number of convolutions (e.g., 0.01% of a layer of a typical neural network), at the expense of less than 2% in accuracy. The overall forest estimates head and body pose well on benchmark datasets, e.g., over 98% accuracy on the HIIT dataset, while requiring only 3.8 ms without using a GPU. Extensive experiments on challenging datasets show that the proposed algorithm performs favorably against state-of-the-art methods on low-resolution images with noise, occlusion, and motion blur.
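The compressive-sensing step can be illustrated with a generic sparse random projection; the matrix construction below is a standard sketch and an assumption, not the CRPnet's exact design:

```python
import numpy as np

def sparse_random_projection(responses, out_dim, density=0.1, seed=0):
    """Project high-dimensional filter-bank responses to a small sketch.

    Minimal illustration of the compressive-sensing idea: a sparse
    random matrix (most entries zero, the rest Gaussian) preserves most
    of the information in sparse responses at a fraction of the cost.
    """
    rng = np.random.default_rng(seed)
    in_dim = responses.shape[-1]
    mask = rng.random((in_dim, out_dim)) < density     # keep ~10% of entries
    proj = rng.standard_normal((in_dim, out_dim)) * mask
    proj /= np.sqrt(density * in_dim)                  # keep norms comparable
    return responses @ proj

x = np.random.rand(5, 1024)          # 5 filter-bank response vectors, 1024-D
y = sparse_random_projection(x, 64)  # compressed to 64-D
```

Because only ~10% of the projection entries are nonzero, the multiply touches far fewer coefficients than a dense mapping would, which is where the speed claim above comes from.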
Then, we shift our attention to image synthesis based on the local-global relationship. Learning how to synthesize and place object instances into an image (semantic map) based on the scene context is a challenging and interesting problem in vision and learning. On one hand, solving this problem requires a joint decision: (a) generating an object mask of a certain class at a plausible scale, location, and shape, and (b) inserting the object instance mask into an existing scene so that the synthesized content is semantically realistic. On the other hand, such a model can synthesize realistic outputs to potentially facilitate numerous image editing and scene parsing tasks. In this work, we propose an end-to-end trainable neural network that can synthesize and insert object instances into an image via a semantic map. The proposed network contains two generative modules that determine where the inserted object should be (i.e., location and scale) and what the object's shape (and pose) should look like. The two modules are connected by a spatial transformer network and jointly trained and optimized in a purely data-driven way. Specifically, we propose a novel network architecture with parallel supervised and unsupervised paths to guarantee diverse results. We show that the proposed architecture learns the context-aware distribution of the location and shape of object instances to be inserted, and that it can generate realistic and statistically meaningful object instances that simultaneously address the where and what sub-problems.
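The joint where/what decision can be caricatured as pasting a predicted shape into a semantic map at a predicted location. The hard paste below stands in for the spatial transformer connection, and all names are illustrative:

```python
import numpy as np

def insert_instance(semantic_map, obj_mask, top, left, class_id):
    """Paste a binary object mask into a semantic map at a given location.

    Toy sketch of composing the 'where' (top, left) and 'what'
    (obj_mask) outputs; the thesis uses a differentiable spatial
    transformer rather than this hard indexing.
    """
    out = semantic_map.copy()
    h, w = obj_mask.shape
    region = out[top:top + h, left:left + w]   # view into the copy
    region[obj_mask > 0] = class_id            # overwrite labels under the mask
    return out

scene = np.zeros((8, 8), dtype=int)   # empty semantic map
mask = np.ones((2, 3), dtype=int)     # predicted instance shape
scene2 = insert_instance(scene, mask, top=5, left=2, class_id=7)
```

In the actual model, `top`, `left`, and the mask would be sampled from the learned location and shape distributions rather than given by hand.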
As the final topic of the thesis, we introduce a new vision problem: generating an image based on a small number of key local patches without any geometric prior. Here, key local patches are defined as informative regions of the target object or scene. This is a challenging problem since it requires generating realistic images and predicting the locations of parts at the same time. We construct adversarial networks to tackle this problem: a generator network produces a fake image as well as a mask based on the encoder-decoder framework, while a discriminator network aims to detect fake images. The network is trained with three losses that account for spatial, appearance, and adversarial information. The spatial loss determines whether the locations of the predicted parts are correct. The appearance loss encourages input patches to be preserved in the output image without much modification. The adversarial loss ensures that output images are realistic. The proposed network is trained without supervisory signals, since no labels of key parts are required. Experimental results on seven datasets demonstrate that the proposed algorithm performs favorably on challenging objects and scenes.
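The three-loss objective can be sketched as a single sum. The L2/L1 forms, the non-saturating adversarial term, and the equal weights below are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def generator_loss(pred_mask, true_mask, restored_patches, input_patches, d_fake):
    """Schematic combination of the spatial, appearance, and adversarial losses.

    All terms and weights here are illustrative stand-ins.
    """
    # Spatial loss: predicted part locations (mask) should match the target.
    l_spatial = np.mean((pred_mask - true_mask) ** 2)
    # Appearance loss: input patches should reappear nearly unchanged.
    l_appearance = np.mean(np.abs(restored_patches - input_patches))
    # Adversarial loss: fool the discriminator (d_fake is its output in (0, 1)).
    l_adv = -np.mean(np.log(d_fake + 1e-8))
    return l_spatial + l_appearance + l_adv

loss = generator_loss(np.zeros((4, 4)), np.zeros((4, 4)),
                      np.ones((2, 2)), np.ones((2, 2)),
                      d_fake=np.array([0.9]))
```

With a perfect mask and perfectly restored patches, only the adversarial term remains, which is why the discriminator keeps pushing the generator toward realism even once the patches are placed correctly.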
1 Introduction
1.1 Organization of the Dissertation
2 Related Work
2.1 Detection methods
2.2 Orientation estimation methods
2.3 Instance synthesis methods
2.4 Image generation methods
3 Pedestrian detection
3.1 Introduction
3.2 Proposed Algorithm
3.2.1 Determinantal Point Process Formulation
3.2.2 Quality Term
3.2.3 Individualness and Diversity Feature
3.2.4 Mode Finding
3.2.5 Relationship to Quadratic Unconstrained Binary Optimization
3.3 Experiments
3.3.1 Experimental Settings
3.3.2 Evaluation Results
3.3.3 DET curves
3.3.4 Sensitivity analysis
3.3.5 Effectiveness of the quality and similarity term design
3.4 Summary
4 Head and body orientation estimation
4.1 Introduction
4.2 Algorithmic Overview
4.3 Rich Filter Bank
4.3.1 Compressed Filter Bank
4.3.2 Box Filter Bank
4.4 Convolutional Random Projection Net
4.4.1 Input Layer
4.4.2 Convolutional and ReLU Layers
4.4.3 Random Projection Layer
4.4.4 Fully-Connected and Output Layers
4.5 Convolutional Random Projection Forest
4.6 Experimental Results
4.6.1 Evaluation Datasets
4.6.2 CRPnet Characteristics
4.6.3 Head and Body Orientation Estimation
4.6.4 Analysis of the Proposed Algorithm
4.6.5 Classification Examples
4.6.6 Regression Examples
4.6.7 Experiments on the Original Datasets
4.6.8 Dataset Corrections
4.7 Summary
5 Instance synthesis and placement
5.1 Introduction
5.2 Approach
5.2.1 The where module: learning a spatial distribution of object instances
5.2.2 The what module: learning a shape distribution of object instances
5.2.3 The complete pipeline
5.3 Experimental Results
5.4 Summary
6 Image generation
6.1 Introduction
6.2 Proposed Algorithm
6.2.1 Key Part Detection
6.2.2 Part Encoding Network
6.2.3 Mask Prediction Network
6.2.4 Image Generation Network
6.2.5 Real-Fake Discriminator Network
6.3 Experiments
6.3.1 Datasets
6.3.2 Image Generation Results
6.3.3 Experimental Details
6.3.4 Image Generation from Local Patches
6.3.5 Part Combination
6.3.6 Unsupervised Feature Learning
6.3.7 An Alternative Objective Function
6.3.8 An Alternative Network Structure
6.3.9 Different Number of Input Patches
6.3.10 Smaller Size of Input Patches
6.3.11 Degraded Input Patches
6.3.12 User Study
6.3.13 Failure cases
6.4 Summary
7 Conclusion and Future Work
End to End Learning in Autonomous Driving Systems
Convolutional neural networks have advanced visual perception significantly in recent years. Two major ingredients behind this success are the composition of simple modules into a complex network and end-to-end optimization. However, this success has not yet revolutionized robotics as much as vision, even though robotics suffers from problems similar to those of traditional computer vision, i.e., the imperfection of manually designed system pipelines. This thesis investigates end-to-end learning for autonomous driving, a concrete robotic application. End-to-end learning can produce reasonable driving behaviors, even in complex urban driving scenarios. Representation learning in end-to-end driving models is crucial, and auxiliary vision tasks such as semantic segmentation can help form a more informative driving representation, especially when training data is limited. Naive convolutional neural networks are usually only capable of reactive control and cannot carry out complex reasoning in a particular scenario. This thesis also studies how to handle scene-conditioned driving behavior, which goes beyond the capability of reactive control. Alongside the end-to-end structure, learning methods also play a critical role. Imitation learning methods can acquire meaningful behaviors, but usually the robot cannot master the skill. Reinforcement learning, on the contrary, either barely learns anything when the environment is too complex or masters the skill otherwise. To get the best of both worlds, this thesis proposes an algorithmically unified method to learn from both demonstration data and the environment
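The "best of both worlds" idea can be sketched as a single objective mixing an imitation term with a reinforcement term. The cross-entropy and REINFORCE-style forms and the weight `w` below are illustrative assumptions, not the thesis's actual unified algorithm:

```python
import numpy as np

def unified_loss(demo_logits, demo_actions, env_logp, env_returns, w=0.5):
    """Blend a behavior-cloning term with a policy-gradient term.

    Sketch only: `demo_logits` are policy logits on demonstration
    states, `env_logp` log-probabilities of actions taken in the
    environment, and `w` an assumed trade-off weight.
    """
    # Imitation: negative log-likelihood of the demonstrated actions.
    probs = np.exp(demo_logits) / np.exp(demo_logits).sum(-1, keepdims=True)
    nll = -np.log(probs[np.arange(len(demo_actions)), demo_actions] + 1e-8)
    l_imitate = nll.mean()
    # Reinforcement: push up log-probability of high-return actions.
    l_reinforce = -(env_logp * env_returns).mean()
    return w * l_imitate + (1 - w) * l_reinforce

loss = unified_loss(np.zeros((2, 3)), np.array([0, 2]),
                    env_logp=np.log(np.array([0.5, 0.5])),
                    env_returns=np.array([1.0, 1.0]))
```

The weight `w` makes the trade-off explicit: demonstrations anchor the policy to meaningful behavior while environment returns refine it toward mastery.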
- โฆ