206 research outputs found

    Structured Understanding of Visual Data Using Partial Information: Sparsity, Randomness, Correlation, and Deep Networks

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, 2019. 2. Oh, Songhwai.

For a deeper understanding of visual data, the relationship between local parts and the global scene has to be carefully examined. Examples of such relationships in vision problems include, but are not limited to, detecting a region of interest in a scene, classifying an image based on limited visual cues, and synthesizing new images conditioned on local or global inputs. In this thesis, we aim to learn this relationship and demonstrate its importance by showing that it is one of the critical keys to addressing four challenging vision problems. For each problem, we construct a deep neural network suited to the task. The first problem considered in the thesis is object detection. It requires not only finding local patches that look like target objects conditioned on the context of the input scene, but also comparing the local patches themselves to assign a single detection to each object. To this end, we introduce the individualness of detection candidates as a complement to objectness for object detection. Individualness assigns a single detection to each object out of the raw detection candidates given by either object proposals or sliding windows. We show that conventional approaches, such as non-maximum suppression, are sub-optimal since they suppress nearby detections using only detection scores. We use a determinantal point process combined with individualness to optimally select the final detections. It models each detection using its quality and its similarity to other detections based on individualness. Detections with high detection scores and low correlations are then selected by measuring their probability using the determinant of a matrix, which is composed of quality terms on the diagonal entries and similarities on the off-diagonal entries.
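The determinant-based selection can be sketched as follows; the quality scores and similarity matrix are hypothetical values for three candidate detections, two of which overlap on the same object:

```python
import numpy as np

# Hypothetical candidates: detection quality (score) for three candidates,
# where candidates 0 and 1 cover the same object and candidate 2 another.
quality = np.array([0.9, 0.85, 0.3])
similarity = np.array([[1.0, 0.95, 0.1],
                       [0.95, 1.0, 0.1],
                       [0.1, 0.1, 1.0]])

# DPP kernel: quality scales the diagonal, similarity fills the off-diagonals.
L = np.outer(quality, quality) * similarity

def subset_score(idx):
    """Unnormalized selection probability of a subset: det of the sub-kernel."""
    return np.linalg.det(L[np.ix_(idx, idx)])

# The diverse pair {0, 2} outscores the redundant pair {0, 1}, even though
# candidate 1 has a much higher quality than candidate 2.
print(subset_score([0, 1]))
print(subset_score([0, 2]))
```

The determinant shrinks when rows are nearly parallel, which is why two highly similar detections are unlikely to be selected together regardless of their individual scores.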
For concreteness, we focus on pedestrian detection, as it is one of the most challenging detection problems due to frequent occlusions and unpredictable human motion. Experimental results demonstrate that the proposed algorithm performs favorably against existing methods, including non-maximum suppression and a quadratic unconstrained binary optimization based method. For the second problem, we classify images based on observations of local patches. More specifically, we consider the problem of estimating the head pose and body orientation of a person from a low-resolution image. In this setting, it is difficult to reliably extract facial features or detect body parts. We propose a convolutional random projection forest (CRPforest) algorithm for these tasks. A convolutional random projection network (CRPnet) is used at each node of the forest. It maps an input image to a high-dimensional feature space using a rich filter bank. The filter bank is designed to generate sparse responses so that they can be efficiently computed by compressive sensing: a sparse random projection matrix can capture most of the essential information contained in the filter bank without using all of its filters. Therefore, the CRPnet is fast, e.g., it requires 0.04 ms to process an image of 50×50 pixels, due to the small number of convolutions (e.g., 0.01% of a layer of a typical neural network), at the expense of less than a 2% drop in accuracy. The overall forest estimates head and body pose well on benchmark datasets, e.g., over 98% on the HIIT dataset, while requiring only 3.8 ms without using a GPU. Extensive experiments on challenging datasets show that the proposed algorithm performs favorably against state-of-the-art methods on low-resolution images with noise, occlusion, and motion blur. We then shift our attention to image synthesis based on the local-global relationship.
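The sparse random projection idea underlying the CRPnet can be sketched as follows; this is a minimal illustration with hypothetical dimensions, using an Achlioptas-style ternary projection matrix rather than the thesis's actual filter bank:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse filter-bank response: high-dimensional, mostly zeros.
d, k = 1000, 20                      # ambient dimension, number of nonzeros
x = np.zeros(d)
x[rng.choice(d, k, replace=False)] = rng.normal(size=k)

# Sparse random projection matrix with entries in {-sqrt(3), 0, +sqrt(3)}.
m = 200                              # projected dimension << d
s = 3.0
R = rng.choice([np.sqrt(s), 0.0, -np.sqrt(s)], size=(m, d),
               p=[1 / (2 * s), 1 - 1 / s, 1 / (2 * s)]) / np.sqrt(m)

y = R @ x                            # compressed measurement

# Norms (and hence distances between responses) are approximately preserved,
# so downstream classification can work in the much smaller space.
print(np.linalg.norm(x), np.linalg.norm(y))
```

The projection touches only about one third of the coordinates per output, which is where the speed-up over computing the full filter bank comes from.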
Learning how to synthesize and place object instances into an image (semantic map) based on the scene context is a challenging and interesting problem in vision and learning. On one hand, solving this problem requires a joint decision of (a) generating an object mask from a certain class at a plausible scale, location, and shape, and (b) inserting the object instance mask into an existing scene so that the synthesized content is semantically realistic. On the other hand, such a model can synthesize realistic outputs to potentially facilitate numerous image editing and scene parsing tasks. In this thesis, we propose an end-to-end trainable neural network that can synthesize and insert object instances into an image via a semantic map. The proposed network contains two generative modules that determine where the inserted object should be (i.e., location and scale) and what the object shape (and pose) should look like. The two modules are connected by a spatial transformation network and jointly trained and optimized in a purely data-driven way. Specifically, we propose a novel network architecture with parallel supervised and unsupervised paths to guarantee diverse results. We show that the proposed network architecture learns the context-aware distribution of the location and shape of object instances to be inserted, and that it can generate realistic and statistically meaningful object instances that simultaneously address the where and what sub-problems. As the final topic of the thesis, we introduce a new vision problem: generating an image based on a small number of key local patches without any geometric prior. In this work, key local patches are defined as informative regions of the target object or scene. This is a challenging problem, since it requires generating realistic images and predicting the locations of parts at the same time. We construct adversarial networks to tackle this problem.
A generator network generates a fake image as well as a mask based on the encoder-decoder framework, while a discriminator network aims to detect fake images. The network is trained with three losses that account for spatial, appearance, and adversarial information. The spatial loss determines whether the locations of the predicted parts are correct. The appearance loss ensures that input patches are restored in the output image without much modification. The adversarial loss ensures that output images are realistic. The proposed network is trained without supervisory signals, since no labels of key parts are required. Experimental results on seven datasets demonstrate that the proposed algorithm performs favorably on challenging objects and scenes.
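The three-loss training objective can be sketched in a simplified form; the tensors, loss forms, and weights below are illustrative stand-ins, not the network's actual definitions:

```python
import numpy as np

def spatial_loss(pred_mask, true_mask):
    """Penalizes wrong predicted part locations (per-pixel squared error)."""
    return np.mean((pred_mask - true_mask) ** 2)

def appearance_loss(generated, patches, mask):
    """Keeps input patches nearly unchanged in the output: masked L1 difference."""
    return np.sum(np.abs(generated - patches) * mask) / (np.sum(mask) + 1e-8)

def adversarial_loss(d_fake):
    """Generator-side adversarial term: push discriminator outputs toward 'real'."""
    return -np.mean(np.log(d_fake + 1e-8))

# Hypothetical 8x8 toy tensors and an arbitrary loss weight.
rng = np.random.default_rng(1)
generated = rng.random((8, 8))
patches = rng.random((8, 8))
mask = (rng.random((8, 8)) > 0.7).astype(float)
pred_mask = np.clip(mask + 0.1 * rng.normal(size=mask.shape), 0.0, 1.0)

total = (spatial_loss(pred_mask, mask)
         + 10.0 * appearance_loss(generated, patches, mask)
         + adversarial_loss(np.array([0.6])))
print(total)
```

The masked appearance term is what lets the input patches survive into the output with little modification, while the adversarial term shapes everything outside the patches.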
๋˜ํ•œ ์‹คํ—˜์„ ํ†ตํ•ด, ์ œ์•ˆํ•œ ๋ฐฉ์‹์ด ๋‹ค์–‘ํ•œ ๋ฐ์ดํ„ฐ์…‹์—์„œ ์ž˜ ๋™์ž‘ํ•จ์„ ๋ณด์˜€๋‹ค.1 Introduction 1 1.1 Organization of the Dissertation . . . . . . . . . . . . . . . . . . . 5 2 Related Work 9 2.1 Detection methods . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 2.2 Orientation estimation methods . . . . . . . . . . . . . . . . . . . . 11 2.3 Instance synthesis methods . . . . . . . . . . . . . . . . . . . . . . 13 2.4 Image generation methods . . . . . . . . . . . . . . . . . . . . . . . 15 3 Pedestrian detection 19 3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 3.2 Proposed Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 22 3.2.1 Determinantal Point Process Formulation . . . . . . . . . . 22 3.2.2 Quality Term . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.2.3 Individualness and Diversity Feature . . . . . . . . . . . . . 25 3.2.4 Mode Finding . . . . . . . . . . . . . . . . . . . . . . . . . . 32 3.2.5 Relationship to Quadratic Unconstrained Binary Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33 3.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3.3.1 Experimental Settings . . . . . . . . . . . . . . . . . . . . . 36 3.3.2 Evaluation Results . . . . . . . . . . . . . . . . . . . . . . . 41 3.3.3 DET curves . . . . . . . . . . . . . . . . . . . . . . . . . . . 43 3.3.4 Sensitivity analysis . . . . . . . . . . . . . . . . . . . . . . . 43 3.3.5 Effectiveness of the quality and similarity term design . . . 44 3.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45 4 Head and body orientation estimation 51 4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51 4.2 Algorithmic Overview . . . . . . . . . . . . . . . . . . . . . . . . . 54 4.3 Rich Filter Bank . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56 4.3.1 Compressed Filter Bank . . . . . . . . . . . . . . . 
. . . . . 57 4.3.2 Box Filter Bank . . . . . . . . . . . . . . . . . . . . . . . . 58 4.4 Convolutional Random Projection Net . . . . . . . . . . . . . . . . 58 4.4.1 Input Layer . . . . . . . . . . . . . . . . . . . . . . . . . . . 59 4.4.2 Convolutional and ReLU Layers . . . . . . . . . . . . . . . 60 4.4.3 Random Projection Layer . . . . . . . . . . . . . . . . . . . 61 4.4.4 Fully-Connected and Output Layers . . . . . . . . . . . . . 62 4.5 Convolutional Random Projection Forest . . . . . . . . . . . . . . 62 4.6 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . 64 4.6.1 Evaluation Datasets . . . . . . . . . . . . . . . . . . . . . . 65 4.6.2 CRPnet Characteristics . . . . . . . . . . . . . . . . . . . . 66 4.6.3 Head and Body Orientation Estimation . . . . . . . . . . . 67 4.6.4 Analysis of the Proposed Algorithm . . . . . . . . . . . . . 87 4.6.5 Classification Examples . . . . . . . . . . . . . . . . . . . . 87 4.6.6 Regression Examples . . . . . . . . . . . . . . . . . . . . . . 100 4.6.7 Experiments on the Original Datasets . . . . . . . . . . . . 100 4.6.8 Dataset Corrections . . . . . . . . . . . . . . . . . . . . . . 100 4.7 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104 5 Instance synthesis and placement 109 5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109 5.2 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111 5.2.1 The where module: learning a spatial distribution of object instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116 5.2.2 The what module: learning a shape distribution of object instances . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 5.2.3 The complete pipeline . . . . . . . . . . . . . . . . . . . . . 120 5.3 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . 121 5.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
128 6 Image generation 129 6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129 6.2 Proposed Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 134 6.2.1 Key Part Detection . . . . . . . . . . . . . . . . . . . . . . 135 6.2.2 Part Encoding Network . . . . . . . . . . . . . . . . . . . . 135 6.2.3 Mask Prediction Network . . . . . . . . . . . . . . . . . . . 137 6.2.4 Image Generation Network . . . . . . . . . . . . . . . . . . 138 6.2.5 Real-Fake Discriminator Network . . . . . . . . . . . . . . . 139 6.3 Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140 6.3.1 Datasets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141 6.3.2 Image Generation Results . . . . . . . . . . . . . . . . . . . 142 6.3.3 Experimental Details . . . . . . . . . . . . . . . . . . . . . . 150 6.3.4 Image Generation from Local Patches . . . . . . . . . . . . 150 6.3.5 Part Combination . . . . . . . . . . . . . . . . . . . . . . . 150 6.3.6 Unsupervised Feature Learning . . . . . . . . . . . . . . . . 151 6.3.7 An Alternative Objective Function . . . . . . . . . . . . . . 151 6.3.8 An Alternative Network Structure . . . . . . . . . . . . . . 151 6.3.9 Different Number of Input Patches . . . . . . . . . . . . . . 152 6.3.10 Smaller Size of Input Patches . . . . . . . . . . . . . . . . . 153 6.3.11 Degraded Input Patches . . . . . . . . . . . . . . . . . . . . 153 6.3.12 User Study . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 6.3.13 Failure cases . . . . . . . . . . . . . . . . . . . . . . . . . . 155 6.4 Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178 7 Conclusion and Future Work 179Docto

    A Study on the Performance Evaluation of OpenGL-based Graphics Sub-System

    SGI developed the 3D API OpenGL and opened the OpenGL specification to the public. Many other companies developing graphics hardware maintain and advance OpenGL-related products based on the OpenGL specification. Software accelerated by OpenGL shows varying results depending on the detailed functions of the accelerating device. However, system hardware performance is often lost because users do not fully understand the graphics hardware and the 3D API. In this thesis, the 3D performance environment settings of a graphics sub-system supporting the OpenGL 3D API were tested with SPECviewperf 7.1 based on Pro/Engineer 2001, which includes two models, three modes (shaded, wireframe, and hidden-line removal), and seven viewsets. Finally, this research aims to determine the optimal environment for the major applications users rely on, based on the results of the 3D performance analysis.

Contents:
1. Introduction
  1.1 Research Background
  1.2 Research Contents
  1.3 Research Objectives
2. OpenGL 3D API Performance Evaluation Environment Setup
  2.1 Graphics Sub-system
  2.2 OpenGL 3D API
  2.3 SPECviewperf
  2.4 Testing Platform
3. SPECviewperf OpenGL 3D Performance Tests
  3.1 OpenGL 3D Performance Test
    3.1.1 Test Plan
    3.1.2 OpenGL Settings Test
    3.1.3 Default Color Depth, Buffer-Flipping Mode, and Vertical Sync Tests
  3.2 OpenGL Graphics Sub-system Optimization Performance Test
  3.3 OpenGL Graphics Sub-system Performance-Degradation Environment Test
4. Analysis and Evaluation of SPECviewperf Test Results
5. Conclusion

    ํžˆ์น˜ํ•˜์ดํ‚น ๋‚จ๋ฏธ(2) - ์น ๋ ˆ ์ธ์‹ฌ, ์•„๋ฅดํ—จํ‹ฐ๋‚˜ ์ธ์‹ฌ


    State Estimation of a Quadrotor Unmanned Aerial Vehicle Based on RGB-PTAM

    ํ•™์œ„๋…ผ๋ฌธ (์„์‚ฌ)-- ์„œ์šธ๋Œ€ํ•™๊ต ๋Œ€ํ•™์› : ๊ธฐ๊ณ„ํ•ญ๊ณต๊ณตํ•™๋ถ€, 2016. 2. ์ด๋™์ค€.๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๋ฌด์ธ๋น„ํ–‰๋กœ๋ด‡์˜ ๋น„ํ–‰์„ ์œ„ํ•œ ์ƒํƒœ์ถ”์ • ๊ธฐ๋ฒ•์— ๋Œ€ํ•ด์„œ ๊ธฐ์ˆ ํ•œ๋‹ค. ๋…ผ๋ฌธ์—์„œ ์ œ์‹œ๋˜๋Š” ์ƒํƒœ์ถ”์ • ๊ธฐ๋ฒ•์€ ์นด๋ฉ”๋ผ์— ์ ์šฉํ•œ ์˜์ƒ ์•Œ๊ณ ๋ฆฌ์ฆ˜๊ณผ ๊ด€์„ฑ์ธก์ •์žฅ์น˜์˜ ๋ฐ์ดํ„ฐ๋ฅผ ๊ธฐ๋ฐ˜์œผ๋กœ ํ™•์žฅ์นผ๋งŒํ•„ํ„ฐ๋ฅผ ์ ์šฉํ•˜์—ฌ, ์œ„์„ฑํ•ญ๋ฒ•์‹œ์Šคํ…œ์ด ์ฐจ๋‹จ๋œ ํ™˜๊ฒฝ์—์„œ์˜ ์ƒํƒœ์ถ”์ •์„ ๋ณด์žฅํ•œ๋‹ค. ์ด๋ฅผ ์œ„ํ•˜์—ฌ ์‚ฌ์šฉ๋˜๋Š” ์˜์ƒ ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด ํšŒ์ƒ‰์กฐ์˜ ๋ช…๋„ ์ •๋ณด๊ฐ€ ์•„๋‹Œ RGB ๊ฐ๊ฐ์˜ ์ƒ‰์ƒ ์ฑ„๋„์„ ์ด์šฉํ•œ๋‹ค. ๋˜ํ•œ ์ œ์•ˆ๋œ ์˜์ƒ ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์ฃผ๋ณ€ ํ™˜๊ฒฝ์˜ ๊ณต๊ฐ„ ๊ธฐํ•˜ํ•™์  ๊ตฌ์กฐ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ํ‰๋ฉด์ƒ์˜ ์ƒ‰์ƒ๋ณ€ํ™”๋ฅผ ๋ฏผ๊ฐํ•˜๊ฒŒ ๊ฐ์ง€ํ•˜๋ฉฐ, ์ž์‹ ์˜ ์œ„์น˜ ๋ฐ ์ž์„ธ๋ฅผ ์ถ”์ •ํ•œ๋‹ค. ๋งˆ์ง€๋ง‰์œผ๋กœ ์ƒํƒœ์ถ”์ • ๊ธฐ๋ฒ•์˜ ์‹คํ—˜ ๊ฒฐ๊ณผ๋ฅผ ์ œ์‹œํ•œ๋‹ค.We introduce a novel state estimation for unmanned aerial vehicle (UAV) based on Simultaneous Localization And Mapping algorithm with RGB color model. The proposed method allows robust and consistent estimation of pose over artificial environments with little texture and plentiful color variation. To achieve this, the method combines high frequency data from inertial measurement unit (IMU) and low frequency data from the vision algorithm based on RGB color model under Extended Kalman Filter framework. The vision algorithm conducts feature extraction on intensities of each RGB color channel instead of grayscale light intensity. While feature extraction on grayscale light intensity detects spatial geometric edges, the extraction on color channels also detects color variation on plane sensitively. 
The method is applied to estimate the pose of a UAV in GPS-denied environments, and an experiment is performed to illustrate its accuracy.

Contents:
Chapter 1 Introduction
  1.1 Motivation and Objectives
  1.2 State of the Art
  1.3 Contribution of this Work
Chapter 2 System Description
  2.1 System overview
  2.2 Control of Unmanned Aerial Vehicles
Chapter 3 State Estimation
  3.1 IMU Sensor Model
  3.2 State Representation
  3.3 Error State Representation
  3.4 Extended Kalman Filter
Chapter 4 Vision algorithm
  4.1 Parallel Tracking and Mapping
  4.2 RGB-PTAM
    4.2.1 Tracking
      4.2.1.1 Feature extraction
      4.2.1.2 Matching
      4.2.1.3 Pose update
    4.2.2 Mapping
Chapter 5 Experiment
  5.1 Hardware
  5.2 Vision algorithm
  5.3 Experimental Result
Chapter 6 Conclusion and Future Work
  6.1 Conclusion
  6.2 Future Work
Bibliography
Abstract (Korean)

    Localization in Low Luminance, Slippery Indoor Environment Using Afocal Optical Flow Sensor and Image Processing

    Thesis (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Information Engineering, 2017. 8. Dong-il Cho.

Localization is a prerequisite for the autonomous driving of indoor service robots. In particular, localization accuracy drops when slippage occurs in low-luminance indoor environments where camera-based localization is difficult. Slippage occurs mainly when driving over carpets or door sills, and wheel-encoder-based odometry is limited in its ability to measure the traveled distance accurately. This dissertation proposes a robust localization method for low-luminance, slippery environments where camera-based simultaneous localization and mapping (SLAM) fails, fusing low-cost motion sensors, an afocal optical flow sensor (AOFS), and a VGA-class front monocular camera. The robot's position is computed by accumulating incremental changes in traveled distance and heading. For more accurate distance estimation even on slippery surfaces, displacement measurements from the wheel encoders and the AOFS are fused, and for heading estimation, a gyroscope and indoor spatial information extracted from front-view images are used. An optical flow sensor estimates displacement robustly against wheel slippage, but when it is mounted on a mobile robot traversing uneven surfaces such as carpet, the varying height between the sensor and the floor becomes the dominant source of distance estimation error. This dissertation mitigates this error source by applying the afocal-system principle to the optical flow sensor.
In experiments using a robotic gantry system, in which the height of the optical flow sensor was varied from 30 mm to 50 mm while traveling 80 cm over carpet and three other floor materials, repeated ten times each, the proposed AOFS module produced a systematic error of 0.1% per 1 mm of height change, whereas a conventional fixed-focus optical flow sensor showed a 14.7% systematic error. When the AOFS was mounted on an indoor service robot driving 1 m over carpet, the mean distance estimation error was 0.02% with a variance of 17.6%, while the same experiment with a fixed-focus optical flow sensor yielded a 4.09% mean error and a 25.7% variance. For cases where the surroundings are too dark to use images for localization, i.e., where a brightened low-luminance image still yields no robust feature points or lines for SLAM, a method is proposed that uses low-luminance images to correct the robot's heading. Applying a histogram equalization algorithm to a low-luminance image brightens it but also increases noise, so a rolling guidance filter (RGF), which removes image noise while sharpening image boundaries, is applied to enhance the image. From the enhanced image, orthogonal straight-line components that form the indoor space are extracted, the vanishing point (VP) is estimated, and the robot's heading relative to the vanishing point is obtained and used for angle correction.
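The heading-from-vanishing-point step can be sketched for a pinhole camera; the focal length and principal point below are hypothetical values, not those of the robot's actual camera:

```python
import numpy as np

# Hypothetical pinhole intrinsics for a VGA front camera (made-up values).
fx, cx = 500.0, 320.0

def heading_from_vp(vp_x):
    """Heading relative to the corridor's dominant direction, from the
    x-coordinate of the vanishing point of lines along that direction."""
    return np.degrees(np.arctan2(vp_x - cx, fx))

print(heading_from_vp(320.0))   # vanishing point at the image center: heading 0
print(heading_from_vp(420.0))   # vanishing point shifted right: positive heading
```

Because the vanishing point depends only on direction, not position, this correction is immune to the translational drift that corrupts the odometry.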
์ œ์•ˆํ•˜๋Š” ๋ฐฉ๋ฒ•์„ ๋กœ๋ด‡์— ์ ์šฉํ•˜์—ฌ 0.06 ~ 0.21 lx ์˜ ์ €์กฐ๋„ ์‹ค๋‚ด ๊ณต๊ฐ„(77 sqm)์— ์นดํŽซ์„ ์„ค์น˜ํ•˜๊ณ  ์ฃผํ–‰ํ–ˆ์„ ๊ฒฝ์šฐ, ๋กœ๋ด‡์˜ ๋ณต๊ท€ ์œ„์น˜ ์˜ค์ฐจ๊ฐ€ ๊ธฐ์กด 401 cm ์—์„œ 21 cm๋กœ ์ค„์–ด๋“ฆ์„ ํ™•์ธํ•  ์ˆ˜ ์žˆ์—ˆ๋‹ค.์ œ 1 ์žฅ ์„œ ๋ก  1 1.1 ์—ฐ๊ตฌ์˜ ๋ฐฐ๊ฒฝ 1 1.2 ์„ ํ–‰ ์—ฐ๊ตฌ ์กฐ์‚ฌ 6 1.2.1 ์‹ค๋‚ด ์ด๋™ํ˜• ์„œ๋น„์Šค ๋กœ๋ด‡์˜ ๋ฏธ๋„๋Ÿฌ์ง ๊ฐ์ง€ ๊ธฐ์ˆ  6 1.2.2 ์ €์กฐ๋„ ์˜์ƒ ๊ฐœ์„  ๊ธฐ์ˆ  8 1.3 ๊ธฐ์—ฌ๋„ 12 1.4 ๋…ผ๋ฌธ์˜ ๊ตฌ์„ฑ 14 ์ œ 2 ์žฅ ๋ฌดํ•œ์ดˆ์  ๊ด‘ํ•™ํ๋ฆ„์„ผ์„œ(AOFS) ๋ชจ๋“ˆ 16 2.1 ๋ฌดํ•œ์ดˆ์  ์‹œ์Šคํ…œ(afocal system) 16 2.2 ๋ฐ”๋Š˜๊ตฌ๋ฉ ํšจ๊ณผ 18 2.3 ๋ฌดํ•œ์ดˆ์  ๊ด‘ํ•™ํ๋ฆ„์„ผ์„œ(AOFS) ๋ชจ๋“ˆ ํ”„๋กœํ† ํƒ€์ž… 20 2.4 ๋ฌดํ•œ์ดˆ์  ๊ด‘ํ•™ํ๋ฆ„์„ผ์„œ(AOFS) ๋ชจ๋“ˆ ์‹คํ—˜ ๊ณ„ํš 24 2.5 ๋ฌดํ•œ์ดˆ์  ๊ด‘ํ•™ํ๋ฆ„์„ผ์„œ(AOFS) ๋ชจ๋“ˆ ์‹คํ—˜ ๊ฒฐ๊ณผ 29 ์ œ 3 ์žฅ ์ €์กฐ๋„์˜์ƒ์˜ ๋ฐฉ์œ„๊ฐ๋ณด์ • ํ™œ์šฉ๋ฐฉ๋ฒ• 36 3.1 ์ €์กฐ๋„ ์˜์ƒ ๊ฐœ์„  ๋ฐฉ๋ฒ• 36 3.2 ํ•œ ์žฅ์˜ ์˜์ƒ์œผ๋กœ ์‹ค๋‚ด ๊ณต๊ฐ„ ํŒŒ์•… ๋ฐฉ๋ฒ• 38 3.3 ์†Œ์‹ค์  ๋žœ๋“œ๋งˆํฌ๋ฅผ ์ด์šฉํ•œ ๋กœ๋ด‡ ๊ฐ๋„ ์ถ”์ • 41 3.4 ์ตœ์ข… ์ฃผํ–‰๊ธฐ๋ก ์•Œ๊ณ ๋ฆฌ์ฆ˜ 46 3.5 ์ €์กฐ๋„์˜์ƒ์˜ ๋ฐฉ์œ„๊ฐ ๋ณด์ • ์‹คํ—˜ ๊ณ„ํš 48 3.6 ์ €์กฐ๋„์˜์ƒ์˜ ๋ฐฉ์œ„๊ฐ ๋ณด์ • ์‹คํ—˜ ๊ฒฐ๊ณผ 50 ์ œ 4 ์žฅ ์ €์กฐ๋„ ํ™˜๊ฒฝ ์œ„์น˜์ธ์‹ ์‹คํ—˜ ๊ฒฐ๊ณผ 54 4.1 ์‹คํ—˜ ํ™˜๊ฒฝ 54 4.2 ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ์‹คํ—˜ ๊ฒฐ๊ณผ 59 4.3 ์ž„๋ฒ ๋””๋“œ ์‹คํ—˜ ๊ฒฐ๊ณผ 61 ์ œ 5 ์žฅ ๊ฒฐ๋ก  62Docto

    A Study on the Trial Results and Performance Trend of Diesel Main Engine

    Before delivery, the shipbuilding company has to carry out performance tests of the main engine in the maker's shop and in sea trials, and then submit standard data so that operators can compare the performance of the main engine after delivery. The shipping company and operators have to manage the engine well to keep shipping schedules without main engine problems. In particular, operators have to operate the main engine within the limits of its operating point and adjust the related parameters so that it runs safely and continuously. Operators should also be able to analyze the fouling condition of the hull by comparing P-V curve data and trial performance results of newly built ships with those of ships in service. This study not only compares main engine performance results from shop trials and sea trials, but also investigates the performance trend of service ships' diesel engines as time elapses. The findings are as follows. First, the shop trial load is higher than the sea trial load, but the ship's speed satisfies the owner's contract speed, because the rating of the main engine accounts for the sea margin and propeller margin.
Second, as time goes by, the load of a service ship's engine increases steadily, and other parameters related to the main engine show varying changes depending on the load increment.

Contents:
Abstract
Chapter 1 Introduction
Chapter 2 Trial Runs in General
  2.1 Definition of trial runs
  2.2 Conditions and procedures of trial runs
    2.2.1 Shop trial
    2.2.2 Sea trial
  2.3 Main engine power rating
    2.3.1 Definition of main engine power
    2.3.2 Main engine performance and horsepower calculation
    2.3.3 Cylinder constant and horsepower
  2.4 Correlation analysis among main engine performance parameters
  2.5 Particulars of the ships studied
Chapter 3 Analysis of Shop Trials and Sea Trials
  3.1 Revolution speed
  3.2 Mean indicated effective pressure
  3.3 Maximum combustion pressure and compression pressure
  3.4 Exhaust gas temperature and scavenging air temperature
  3.5 Fuel consumption rate
Chapter 4 Analysis of Performance Changes of Ships in Service
  4.1 Propeller revolution speed
  4.2 Mean effective pressure and exhaust gas temperature
  4.3 Maximum combustion pressure and compression pressure
  4.4 V.I.T. index
  4.5 Turbocharger revolution speed and scavenging air pressure
  4.6 Turbocharger turbine inlet/outlet temperature difference
  4.7 Air cooler cooling efficiency
  4.8 Cylinder oil consumption rate
Chapter 5 Conclusion
References
Appendix

    ํžˆ์น˜ํ•˜์ดํ‚น ๋‚จ๋ฏธ(1) - ๋ฐœํŒŒ๋ผ์ด์†Œ ๊ฐ€๋Š” ๊ธธ

    I thought we had drawn terribly close to the sky, yet the cloudless blue was still far and high. The wind of the 4,000-meter Andes, slipping through my thin clothes, was cold and clear. As always, we were in the back of a truck, rocked by the irregular rattling rhythm of the unpaved road, on our way to the next destination. One companion dozed under a coat of red dust; another muttered over a Spanish book despite the fierce wind. A pair of shoes taken off, several huge backpacks, and beyond that no trace of people. Only the towering rocks of the great Andes, exotic cacti, stunted trees, and an endlessly winding road stretched before our eyes. The moment I took in colors so vivid they seemed unreal, even their scent, the busy hours spent preparing this trip in my gray city had already grown faint. It was March when, harboring the vague, dizzying ambition to throw myself bodily at the South American continent, I began to plan the trip. Over time a route through Peru, Bolivia, Chile, and Argentina took shape, and the people who would come along became concrete. Preparing for the journey was as eventful as the journey itself. Short of funds, we turned our plan into a proposal and went around to companies and schools, mostly in vain; members changed; it was hard to reconcile our opinions, and feelings were hurt. Even as our energy flagged, booking flights, applying for U.S. visas, and all manner of chores filled every single day.
์ •๋ง ๋‚ด๊ฐ€ ๋‚จ๋ฏธ ๋•…์„ ๋ฐŸ๊ฒŒ ๋ ๊นŒ, ์Šค์Šค ๋กœ๋„ ์‹ค๊ฐํ•˜์ง€ ๋ชปํ•˜๋Š” ์‚ฌ์ด์—๋„ ์šฐ๋ฆฌ๊ฐ€ ๋– ๋‚  6์›” 30์ผ์€ ๊พธ์ค€ํžˆ ๋‹ค ๊ฐ€์™€์„œ, ๋‚œ ๋งˆ์นจ๋‚ด ๋‚จ๋ฏธ๋กœ ํ–ฅํ•˜๋Š” ๋น„ํ–‰๊ธฐ ์•ž์—์„œ ๋ถ•๋ถ•๊ฑฐ๋ฆฌ๋Š” ๋งˆ์Œ์„ ๋‹ค์žก์œผ๋ฉฐ ์„ฐ๋‹ค. ๋„ค ๋ช…์˜ ๋™๋ฃŒ, ์—ฌ๊ธฐ์ €๊ธฐ์„œ ํž˜๊ฒน๊ฒŒ ์ง€์›๋ฐ›์€ ์ฒด์žฌ๋น„, ์ปค๋‹ค๋ž€ ๋ฐฐ๋‚ญ ํ•˜๋‚˜, ๊ทธ๋ฆฌ๊ณ  ๋‚จ๋ฏธ ๋Œ€๋ฅ™์„ ํžˆ์น˜ํ•˜์ดํ‚น๋งŒ์œผ๋กœ ๋ˆ„๋น„๊ฒ ๋‹ค๋Š” ๋‘๊ทผ๊ฑฐ๋ฆฌ๋Š” ๊ฟˆ์ด ๋‚˜์™€ ํ•จ๊ป˜ ๋†“์—ฌ์žˆ์—ˆ๋‹ค. [์•ˆ์†Œ์—ฐ

    A Study on the Revitalization of Yangsan ICD

    Get PDF
    Abstract: Since its opening in March 2000, Yangsan ICD (Inland Container Depot) has played an important role for Busan Port and Korea's port and logistics industries, strengthening the international logistics and container transportation competitiveness of the Busan region and handling 1,330 thousand TEU in 2005. However, new measures to revitalize Yangsan ICD must be sought, since its cargo volume decreased rapidly after the opening of Busan New Port and its hinterland in 2006. Therefore, this study constructed an evaluation model using the AHP (analytic hierarchy process) and surveyed local businesses and persons concerned with Yangsan ICD to identify revitalization measures. The research finds that, for revitalization, Yangsan ICD should convert its functions to those of a logistics center (terminal facilities, logistics warehouses), taking advantage of its favorable location, and that the stable activity of existing and new businesses should be guaranteed through an extension of the utilization period and the securing of the building-to-land ratio.
Furthermore, by utilizing facilities such as the railway station inside the ICD, an active railway revitalization policy could increase cargo volume, and Yangsan ICD should fulfill its role as an inland logistics depot through the revitalization of railway freight transportation within a national logistics system currently centered on road freight transportation. Table of contents: List of Tables; List of Figures; Abstract; Chapter 1 Introduction (1.1 Background and purpose of the study; 1.2 Method and structure of the study); Chapter 2 Current status of Yangsan ICD operation (2.1 Operation status of Yangsan ICD: 2.1.1 Overview, 2.1.2 Functions, 2.1.3 Operation status, 2.1.4 Cargo volume; 2.2 Operation of inland logistics depots: 2.2.1 Status of inland logistics depots, 2.2.2 Cargo volume of inland logistics depots; 2.3 Problems of Yangsan ICD); Chapter 3 Review of previous studies and research model (3.1 Review of previous studies; 3.2 Research model: 3.2.1 AHP (Analytic Hierarchy Process), 3.2.2 Structure of the study); Chapter 4 Measures for the revitalization of Yangsan ICD (4.1 Profile of respondents; 4.2 Importance analysis for the revitalization of Yangsan ICD: 4.2.1 Importance of evaluation criteria, 4.2.2 Importance of detailed evaluation factors, 4.2.3 Importance by evaluation factor; 4.3 Comparative importance analysis for the revitalization of Yangsan ICD: 4.3.1 Comparison of importance of evaluation criteria, 4.3.2 Comparison of importance of detailed evaluation factors, 4.3.3 Comparison of importance by evaluation factor; 4.4 Implications); Chapter 5 Conclusion (5.1 Summary of findings; 5.2 Limitations and future research directions); References; Appendix
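The AHP evaluation model mentioned in the abstract derives criterion weights from pairwise comparisons. A minimal sketch using the row geometric-mean approximation and Saaty's consistency ratio; the three criteria and the comparison values below are hypothetical, not those of the study's survey:

```python
import math

def ahp_weights(matrix):
    """Approximate AHP priority weights from a pairwise comparison
    matrix via the row geometric-mean method."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

def consistency_ratio(matrix, weights):
    """Saaty's consistency ratio; values below 0.1 are usually accepted."""
    n = len(matrix)
    # lambda_max: average of (A w)_i / w_i over all rows
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index by matrix size
    return ci / ri

# Hypothetical criteria: location advantage, facility use, policy support
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
]
w = ahp_weights(pairwise)
cr = consistency_ratio(pairwise, w)
```

With a consistent matrix like this one, the consistency ratio comes in well below the conventional 0.1 threshold, so the derived weights can be used to rank the evaluation factors.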

    Easily Removable Ureteral Catheters for Internal Drainage in Children: A Preliminary Report

    Get PDF
    PURPOSE: We review our experience with a new, easily removable ureteral catheter in patients who underwent complicated ureteral reimplantation. Our goal was to shorten the hospital stay and reduce anxiety during catheter removal, without fear of postoperative ureteral obstruction. MATERIALS AND METHODS: Between April 2009 and September 2010, nine patients who underwent our new method of catheter removal after ureteral reimplantation were enrolled. Patients who underwent simple ureteral reimplantation were excluded from the study. Following ureteral reimplantation, a combined drainage system consisting of a suprapubic cystostomy catheter and a ureteral catheter was installed. The proximal external tubing was clamped with a Hem-o-lok clip, and the rest of the external tubing was removed. Data on each patient's age and sex, the reason for operation, the method of ureteral reimplantation, and postoperative parameters such as length of hospital stay and complications were recorded. RESULTS: Of the nine patients, four had refluxing megaureter, four had a solitary or non-functional contralateral kidney, and one had ureteral stricture due to a previous anti-reflux operation. The catheter was removed at postoperative week one. The mean postoperative hospital stay was 2.4 days (range 1-4 days), and the mean follow-up was 9.8 months. None of the patients had a postoperative ureteral obstruction, and there were no cases of catheter migration or dislodgement. CONCLUSION: Our new method of removing the ureteral catheter can shorten hospital stays and lower anxiety during catheter removal in patients at high risk of postoperative ureteral obstruction.

    Dynamics of heart rate variability in patients with type 2 diabetes mellitus during spinal anaesthesia: prospective observational study

    Get PDF
    BACKGROUND: Little is known about the changes in autonomic function during spinal anaesthesia in type 2 diabetic patients. The purpose of this study was to assess the influence of spinal anaesthesia on heart rate variability in type 2 diabetic patients according to the glycated hemoglobin (HbA1c) level. METHODS: Sixty-six patients scheduled for elective orthopaedic lower-limb surgery were assigned to three groups (n = 22 each) according to HbA1c: controlled diabetes mellitus (HbA1c < 7%), uncontrolled diabetes mellitus (HbA1c >= 7%), and a non-diabetic control group. Heart rate variability was measured 10 min before (T0), and at 10 min (T1), 20 min (T2), and 30 min (T3) after spinal anaesthesia. RESULTS: Before spinal anaesthesia, total, low-, and high-frequency powers were significantly lower in the uncontrolled diabetic group than in the other groups (p < 0.05). During spinal anaesthesia, total, low-, and high-frequency powers did not change in the uncontrolled diabetic group, while the low-frequency power in the controlled diabetic group was significantly depressed (p < 0.05). The ratio of low- to high-frequency power was comparable among the groups, although it was lower at T1 and T2 than at T0 in all groups. Blood pressures were higher in the uncontrolled diabetic group than in the other groups. CONCLUSIONS: Spinal anaesthesia influenced cardiac autonomic modulation in controlled diabetic patients, but not in uncontrolled diabetic patients. There were no differences in any haemodynamic variables during an adequate level of spinal anaesthesia in controlled and uncontrolled type 2 DM. TRIAL REGISTRATION: ClinicalTrials.gov NCT02137057.
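The low- and high-frequency powers reported in the abstract are standard frequency-domain heart rate variability measures. A minimal sketch of how an LF/HF ratio might be computed from a series of RR intervals (resample the tachogram evenly, then integrate the spectrum over the LF and HF bands); the band edges follow common HRV convention, and the input data below are synthetic, not from the trial:

```python
import numpy as np

def lf_hf_ratio(rr_ms, fs=4.0):
    """Estimate LF and HF power and their ratio from RR intervals (ms)."""
    rr = np.asarray(rr_ms, dtype=float)
    t = np.cumsum(rr) / 1000.0            # beat times in seconds
    t -= t[0]
    # Resample the irregular tachogram on an even grid so an FFT applies
    ts = np.arange(0.0, t[-1], 1.0 / fs)
    rr_even = np.interp(ts, t, rr)
    rr_even -= rr_even.mean()             # remove the DC component
    psd = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    lf = psd[(freqs >= 0.04) & (freqs < 0.15)].sum()   # LF band: 0.04-0.15 Hz
    hf = psd[(freqs >= 0.15) & (freqs < 0.40)].sum()   # HF band: 0.15-0.40 Hz
    return lf, hf, lf / hf

# Synthetic tachogram: ~1 s beats modulated at ~0.1 Hz (inside the LF band)
rr = [1000.0 + 50.0 * np.sin(2 * np.pi * 0.1 * i) for i in range(300)]
lf, hf, ratio = lf_hf_ratio(rr)
```

Since the synthetic modulation sits in the LF band, LF power dominates and the ratio exceeds 1; a depressed LF power, as reported for the controlled diabetic group under spinal anaesthesia, would pull this ratio down.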
    • โ€ฆ
    corecore