6 research outputs found

    ๊ณต๋™ ์ž„๋ฒ ๋”ฉ ๋ฐ ๋ถ„์‚ฐ ์ž„๋ฒ ๋”ฉ ๋ฐฉ์‹์„ ํ†ตํ•œ ๊ต์ฐจ ๋ชจ๋‹ฌ ํ‘œํ˜„ ํ•™์Šต ๋ฐฉ๋ฒ• ์—ฐ๊ตฌ

    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ๊ณต๊ณผ๋Œ€ํ•™ ์ „๊ธฐยท์ •๋ณด๊ณตํ•™๋ถ€, 2022.2. ์ตœ์ง„์˜.๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ๊ต์ฐจ ๋ชจ๋‹ฌ ํ‘œํ˜„ ํ•™์Šต์—์„œ ๋ฐœ์ƒํ•  ์ˆ˜ ์žˆ๋Š” ๋ฌธ์ œ์ ๋“ค์„ ๊ฐœ์„ ํ•˜๊ธฐ ์œ„ํ•œ ๋‘ ๊ฐ€์ง€ ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ์ฒซ ์งธ, ๊ธฐ์กด์˜ ๊ณต๋™ ์ž„๋ฒ ๋”ฉ ๋ฐฉ์‹์˜ ๊ต์ฐจ ๋ชจ๋‹ฌ ํ‘œํ˜„ ํ•™์Šต ๋ชจ๋ธ์ด ์ƒ์ดํ•œ ๋ชจ๋‹ฌ ๋ฐ์ดํ„ฐ ์‚ฌ์ด์˜ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ ์–ด๋ ค์šด ๋‹จ์ ์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•˜์—ฌ, ๋ถ„์‚ฐ ์ž„๋ฒ ๋”ฉ ๋ฐฉ์‹์˜ ๊ต์ฐจ ๋ชจ๋‹ฌ ํ•™์Šต ๋ชจ๋ธ์„ ์ œ์•ˆํ•œ๋‹ค. ๋ถ„์‚ฐ ์ž„๋ฒ ๋”ฉ ๋ฐฉ์‹์˜ ํ•™์Šต ๋ชจ๋ธ์€ ๋จผ์ € ๊ฐ ๋ชจ๋‹ฌ๋งˆ๋‹ค ๋…๋ฆฝ์ ์œผ๋กœ ๋‹จ๋… ๋ชจ๋‹ฌ ํ‘œํ˜„ ํ•™์Šต์„ ์ˆ˜ํ–‰ํ•จ์œผ๋กœ์จ ๊ฐ ๋ชจ๋‹ฌ๋งˆ๋‹ค ํŠนํ™”๋œ ์ž„๋ฒ ๋”ฉ ๊ณต๊ฐ„์„ ํ•™์Šตํ•œ๋‹ค. ๊ทธ ํ›„ ๊ต์ฐจ ๋ชจ๋‹ฌ ํ‘œํ˜„์„ ํ•™์Šตํ•˜๊ธฐ ์œ„ํ•ด ์—ฌ๋Ÿฌ ๋ชจ๋‹ฌ์˜ ์ž„๋ฒ ๋”ฉ ๊ณต๊ฐ„์‚ฌ์ด๋ฅผ ์—ฐ๊ฒฐํ•˜๋Š” ์—ฐ์ƒํ•™์Šต ๋ชจ๋“ˆ์„ ํ•™์Šตํ•œ๋‹ค. ๋‘ ๋‹จ๊ณ„๋ฅผ ๊ฑฐ์น˜๋Š” ํ•™์Šต ๊ณผ์ •์„ ํ†ตํ•ด ์ œ์•ˆํ•˜๋Š” ๋ชจ๋ธ์€ ์ƒ์ดํ•œ ๋ชจ๋‹ฌ๋“ค ๊ฐ„์˜ ๊ต์ฐจ ๋ชจ๋‹ฌ ํ‘œํ˜„ํ•™์Šต๋„ ์ž˜ ์ˆ˜ํ–‰ํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์Œ์ด ์ฃผ์–ด์ง€์ง€ ์•Š์€ ๊ต์ฐจ ๋ชจ๋‹ฌ ๋ฐ์ดํ„ฐ๋„ ํ™œ์šฉํ•˜์—ฌ ํ•™์Šตํ•  ์ˆ˜ ์žˆ๋‹ค๋Š” ์žฅ์ ์„ ๊ฐ€์ง„๋‹ค. ์ƒ์ดํ•œ ๋ชจ๋‹ฌ ๊ด€๊ณ„ ์ค‘ ํ•˜๋‚˜์ธ ์‹œ๊ฐ๊ณผ ์ฒญ๊ฐ ๋ชจ๋‹ฌ ๊ฐ„์˜ ๋ฐ์ดํ„ฐ ์ƒ์„ฑ ์‹คํ—˜์—์„œ ์ œ์•ˆํ•˜๋Š” ๋ฐฉ๋ฒ•์€ ๊ธฐ์กด์˜ ๊ณต๋™ ์ž„๋ฒ ๋”ฉ ๋ฐฉ์‹์˜ ๋ชจ๋ธ๋ณด๋‹ค ํ–ฅ์ƒ๋œ ์„ฑ๋Šฅ์„ ๊ฒ€์ฆํ•˜์˜€๋‹ค. ๋‘˜ ์งธ, ๊ต์ฐจ ๋ชจ๋‹ฌ ํ‘œํ˜„ ํ•™์Šต์„ ์œ„ํ•ด์„œ๋Š” ๋ชจ๋‹ฌ๊ฐ„ ์Œ์„ ์ด๋ฃจ๋Š” ๋ฐ์ดํ„ฐ๊ฐ€ ํ•„์ˆ˜์ ์ด์ง€๋งŒ ์‹ค์ œ ์‘์šฉ๋ถ„์•ผ์—์„œ ์ถฉ๋ถ„ํ•œ ์ˆ˜์˜ ๋ฐ์ดํ„ฐ ์Œ์„ ํ™•๋ณดํ•˜๋Š” ๊ฒƒ์€ ์–ด๋ ต๋‹ค. ์ด๋Ÿฌํ•œ ๋ฌธ์ œ์ ์„ ํ•ด๊ฒฐํ•˜๊ธฐ ์œ„ํ•˜์—ฌ ๊ต์ฐจ ๋ชจ๋‹ฌ ํ‘œํ˜„ ํ•™์Šต์„ ์œ„ํ•œ ๋Šฅ๋™์  ํ•™์Šต ๋ฐฉ๋ฒ•์„ ์ œ์•ˆํ•œ๋‹ค. ํŠนํžˆ ๊ต์ฐจ ๋ชจ๋‹ฌ ํ‘œํ˜„ ํ•™์Šต ๊ด€๋ จ ์‘์šฉ๋ถ„์•ผ ์ค‘ ํ•˜๋‚˜์ธ ์ด๋ฏธ์ง€-ํ…์ŠคํŠธ ๋ฐ˜ํ™˜์— ๋Œ€ํ•œ ๋Šฅ๋™์  ํ•™์Šต์„ ์ œ์•ˆํ•œ๋‹ค. ๊ธฐ์กด์˜ ์ด๋ฏธ์ง€-ํ…์ŠคํŠธ ๋ฐ˜ํ™˜์— ๋Œ€ํ•œ ๋Šฅ๋™์  ํ•™์Šต ์‹œ๋‚˜๋ฆฌ์˜ค๋Š” ์ตœ์‹ ์˜ ์ด๋ฏธ์ง€-ํ…์ŠคํŠธ ๋ฐ˜ํ™˜ ๋ฐ์ดํ„ฐ์…‹์— ์ ์šฉํ•˜๊ธฐ ์–ด๋ ต๊ธฐ ๋•Œ๋ฌธ์—, ๋ณธ ๋…ผ๋ฌธ์—์„œ๋Š” ์šฐ์„  ์ตœ์‹ ์˜ ๋ฐ์ดํ„ฐ์…‹์— ์ ํ•ฉํ•œ ๋Šฅ๋™์  ํ•™์Šต ์‹œ๋‚˜๋ฆฌ์˜ค๋ฅผ ๋จผ์ € ์ œ์•ˆํ•œ๋‹ค. ์ฃผ์–ด์ง„ ์ด๋ฏธ์ง€-ํ…์ŠคํŠธ ์Œ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•˜์—ฌ ์‚ฌ๋žŒ์—๊ฒŒ ๋ถ„๋ฅ˜ ๋ผ๋ฒจ์„ ์š”์ฒญํ•˜๋Š” ๊ธฐ์กด์˜ ์‹œ๋‚˜๋ฆฌ์˜ค์™€๋Š” ๋‹ฌ๋ฆฌ, ์ œ์•ˆํ•˜๋Š” ์‹œ๋‚˜๋ฆฌ์˜ค๋Š” ์Œ์ด ์ฃผ์–ด์ง€์ง€ ์•Š์€ ์ด๋ฏธ์ง€ ํ˜น์€ ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์— ๋Œ€ํ•˜์—ฌ ์‚ฌ๋žŒ์—๊ฒŒ ๋‚˜๋จธ์ง€ ๋ชจ๋‹ฌ๋ฆฌํ‹ฐ์˜ ๋ฐ์ดํ„ฐ๋ฅผ ์š”์ฒญํ•˜์—ฌ ์Œ ๋ฐ์ดํ„ฐ๋ฅผ ํ™•๋ณดํ•˜๋Š” ๊ฒƒ์„ ๋ชฉํ‘œ๋กœ ํ•œ๋‹ค. ๋˜ํ•œ ์ œ์•ˆํ•˜๋Š” ์‹œ๋‚˜๋ฆฌ์˜ค์— ์ ํ•ฉํ•œ ๋Šฅ๋™์  ํ•™์Šต ์•Œ๊ณ ๋ฆฌ์ฆ˜๋„ ์ œ์•ˆํ•œ๋‹ค. ์ œ์•ˆํ•˜๋Š” ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์ด๋ฏธ์ง€-ํ…์ŠคํŠธ ๋ฐ˜ํ™˜์—์„œ ์ฃผ๋กœ ์‚ฌ์šฉ๋˜๋Š” ์ตœ๋Œ€ ํžŒ์ง€ ํŠธ๋ฆฌํ”Œ๋ › ์†์‹คํ•จ์ˆ˜์— ๊ฐ€์žฅ ์˜ํ–ฅ๋ ฅ์„ ๋งŽ์ด ๋ผ์น  ๊ฒƒ์œผ๋กœ ์ƒ๊ฐ๋˜๋Š” ๋ฐ์ดํ„ฐ๋ฅผ ์„ ๋ณ„ํ•œ๋‹ค. ์ด๋ฅผ ์œ„ํ•ด ํŠน์ • ๋ฐ์ดํ„ฐ๊ฐ€ ์†์‹คํ•จ์ˆ˜์— ์˜ํ–ฅ๋ ฅ์„ ๋ฏธ์น  ์ˆ˜ ์žˆ๋Š” ์กฐ๊ฑด์„ ์ •์˜ํ•˜๊ณ , ์ •์˜๋œ ์กฐ๊ฑด์— ๊ธฐ๋ฐ˜ํ•˜์—ฌ ๋ฐ์ดํ„ฐ๊ฐ€ ์†์‹คํ•จ์ˆ˜์— ๋ฏธ์น˜๋Š” ์˜ํ–ฅ๋ ฅ ์ ์ˆ˜๋ฅผ ์ถ”์ •ํ•œ๋‹ค. ์ œ์•ˆํ•˜๋Š” ์•Œ๊ณ ๋ฆฌ์ฆ˜์€ ์˜ํ–ฅ๋ ฅ ์ ์ˆ˜๊ฐ€ ๊ฐ€์žฅ ๋†’์€ ์ˆœ์„œ๋Œ€๋กœ ๋ฐ์ดํ„ฐ๋ฅผ ์„ ํƒํ•˜์—ฌ ์‚ฌ๋žŒ์—๊ฒŒ ๋‚˜๋จธ์ง€ ์Œ ๋ฐ์ดํ„ฐ๋ฅผ ์ œ๊ณตํ•ด์ค„ ๊ฒƒ์„ ์š”์ฒญํ•œ๋‹ค. ์ตœ์‹ ์˜ ์ด๋ฏธ์ง€-ํ…์ŠคํŠธ ๋ฐ์ดํ„ฐ์…‹์—์„œ์˜ ์ œ์•ˆํ•˜๋Š” ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด ๋ฌด์ž‘์œ„๋กœ ์Œ ๋ฐ์ดํ„ฐ๋ฅผ ํ™•๋ณดํ•˜๋Š” ๊ฒƒ๋ณด๋‹ค ํ•™์Šต๋ฐ์ดํ„ฐ ์ˆ˜ ๋Œ€๋น„ ํ–ฅ์ƒ๋œ ์„ฑ๋Šฅ์„ ๋‹ฌ์„ฑํ•˜๋Š” ๊ฒƒ์„ ๋ณด์—ฌ์ฃผ์—ˆ๋‹ค.In this dissertation, we propose two methods to overcome problems that may occur in cross-modal representation learning. 
    First, in order to overcome the problem that existing joint-embedding-based models have difficulty learning relations among data from heterogeneous modalities, we propose a cross-modal representation learning model adopting the distributed embedding method. The proposed model first learns intra-modal association by training a specialized embedding space for each modality with single-modal representation learning. Then the proposed model learns cross-modal association by introducing an associator, which connects the embedding spaces of multiple modalities. To separate the learning of intra-modal and cross-modal association, the model parameters involved in intra-modal association are not updated during training of cross-modal association. Through this two-step learning process, the proposed model performs cross-modal representation learning well even among heterogeneous modalities. Furthermore, the proposed model has the advantage of being able to utilize unpaired data for learning. We validated the proposed method on the cross-modal data generation task between visual and auditory modalities, one example of a heterogeneous modality relationship, where it achieves improved performance compared to existing joint-embedding-based models.
    Second, although cross-modal paired data is essential for cross-modal representation learning, securing a sufficient number of paired samples is difficult in practical applications. To mitigate this data shortage problem, we propose an active learning method for cross-modal representation learning, in particular for image-text retrieval, one of the most popular applications of cross-modal representation learning. Since the existing active learning scenario for image-text retrieval cannot be applied to recent image-text retrieval benchmarks, we first propose an active learning scenario feasible for these benchmarks. In contrast to the existing scenario, where a category label for a given image-text pair is queried from human experts, in the proposed scenario unpaired image or text data are given and human experts are asked to pair them. We also propose an active learning algorithm for this scenario. The algorithm selects the data expected to have the most influence on the max-hinge triplet loss, the loss function mainly adopted in recent image-text retrieval methods. To this end, we define the condition under which a data point can influence the loss function and, based on this condition, estimate an influence score (referred to as HN-Score) for each data point. The proposed algorithm selects the data with the highest scores. We validate the effectiveness of the proposed active learning algorithm through various experiments on recent image-text retrieval benchmarks.
    Contents: 1 Introduction; 2 Preliminary (2.1 Associative Learning in Human Brain; 2.2 Cross-modal Representation Learning; 2.3 Active Learning); 3 Distributed Embedding Model (3.1 Contribution; 3.2 Motivation; 3.3 Graphical Modeling; 3.4 Realization; 3.5 Experiment; 3.6 Summary); 4 Cross-modal Active Learning (4.1 Contribution; 4.2 Proposed Active Learning for ITR; 4.3 Experiments; 4.4 Summary; 4.5 Appendix); 5 Conclusion.
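    As a rough illustration of the two-stage training described in this abstract, the following PyTorch-style sketch first trains a separate autoencoder per modality (intra-modal association) and then trains an associator between the frozen embedding spaces (cross-modal association). The module names, the use of autoencoders for single-modal learning, and the MSE alignment loss are assumptions made for illustration, not the dissertation's actual implementation.

```python
import torch
import torch.nn as nn

class ModalityAutoencoder(nn.Module):
    """Single-modal representation learning: one specialized embedding space per modality."""
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))
        self.decoder = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

class Associator(nn.Module):
    """Connects two modality-specific embedding spaces (cross-modal association)."""
    def __init__(self, emb_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(emb_dim, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim))

    def forward(self, z):
        return self.net(z)

def train_distributed_embedding(visual_data, audio_data, paired_batches, emb_dim=64, epochs=10):
    vis_ae = ModalityAutoencoder(visual_data.shape[1], emb_dim)
    aud_ae = ModalityAutoencoder(audio_data.shape[1], emb_dim)

    # Stage 1: intra-modal association, trained independently per modality.
    # Unpaired data from either modality can be used here.
    for ae, data in ((vis_ae, visual_data), (aud_ae, audio_data)):
        opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
        for _ in range(epochs):
            _, recon = ae(data)
            loss = nn.functional.mse_loss(recon, data)
            opt.zero_grad(); loss.backward(); opt.step()

    # Stage 2: cross-modal association. Stage-1 parameters are not updated:
    # only the associator is optimized, and the encoders run under no_grad.
    assoc = Associator(emb_dim)
    opt = torch.optim.Adam(assoc.parameters(), lr=1e-3)
    for _ in range(epochs):
        for x_vis, x_aud in paired_batches:
            with torch.no_grad():
                z_vis = vis_ae.encoder(x_vis)
                z_aud = aud_ae.encoder(x_aud)
            loss = nn.functional.mse_loss(assoc(z_vis), z_aud)
            opt.zero_grad(); loss.backward(); opt.step()
    return vis_ae, aud_ae, assoc
```

    Cross-modal generation from visual to auditory data would then decode `assoc(z_vis)` with the audio decoder; keeping the stage-1 parameters frozen is what separates intra-modal learning from cross-modal learning.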

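    The influence-score-based selection in the second method above can be illustrated, very loosely, with the NumPy sketch below. The dissertation's exact influence condition and HN-Score definition are not reproduced here; the `influence_scores` proxy (how strongly an unpaired image would act as a hard negative under a VSE++-style max-hinge loss against the currently paired texts) and all names are assumptions for illustration only.

```python
import numpy as np

def max_hinge_loss(sim, margin=0.2):
    """Max-hinge triplet loss over a similarity matrix of paired data
    (rows: images, cols: texts; the diagonal holds the matched pairs)."""
    pos = np.diag(sim)
    masked = sim - 1e9 * np.eye(len(sim))   # exclude the positive pair itself
    hardest_txt = masked.max(axis=1)        # hardest negative text per image
    hardest_img = masked.max(axis=0)        # hardest negative image per text
    return (np.maximum(0.0, margin - pos + hardest_txt)
            + np.maximum(0.0, margin - pos + hardest_img)).sum()

def influence_scores(unpaired_emb, paired_img_emb, paired_txt_emb, margin=0.2):
    """Assumed proxy for an influence score: total hinge violation an unpaired image
    would introduce as a hard negative against the currently paired texts."""
    pos = (paired_img_emb * paired_txt_emb).sum(axis=1)      # matched-pair similarities
    sim = unpaired_emb @ paired_txt_emb.T                    # candidate vs. paired texts
    violation = np.maximum(0.0, margin - pos[None, :] + sim)
    return violation.sum(axis=1)

def select_queries(unpaired_emb, paired_img_emb, paired_txt_emb, budget=32):
    scores = influence_scores(unpaired_emb, paired_img_emb, paired_txt_emb)
    return np.argsort(-scores)[:budget]     # indices to send to annotators for pairing
```

    In such a scheme, the selected indices are sent to annotators, who supply the missing text for each chosen image; the newly paired data is then added to the training set before the next active learning round.
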
    A machine learning approach to Structural Health Monitoring with a view towards wind turbines

    The work of this thesis is centred around Structural Health Monitoring (SHM) and is divided into three main parts. The thesis starts by exploring different architectures of auto-association. These are evaluated in order to demonstrate the ability of nonlinear auto-association in neural networks with one nonlinear hidden layer, which is of great interest in terms of reduced computational complexity. It is shown that linear PCA lacks performance for novelty detection. A key novel finding is that single-hidden-layer auto-associators do not perform in a similar fashion to PCA. The second part of this study concerns formulating pattern recognition algorithms for SHM purposes which could be used in the wind energy sector, as SHM in this research field is still at an embryonic level compared to civil and aerospace engineering. The purpose of this part is to investigate the effectiveness and performance of such methods in structural damage detection. Experimental measurements such as high-frequency response functions (FRFs) were extracted from a 9 m wind turbine blade throughout a full-scale continuous fatigue test. A preliminary analysis of a regression model of virtual SCADA data from an offshore wind farm is also proposed, using Gaussian processes and neural network regression techniques. The third part of this work introduces robust multivariate statistical methods into SHM by inclusively revealing how the influence of environmental and operational variation affects features that are sensitive to damage. The algorithms described are the Minimum Covariance Determinant (MCD) estimator and the Minimum Volume Enclosing Ellipsoid (MVEE). These robust outlier methods are inclusive, and in turn there is no need to pre-determine an undamaged-condition data set, offering an important advantage over other multivariate methodologies. Two real-life experimental applications, to the Z24 bridge and to an aircraft wing, are analysed. Furthermore, with the usage of the robust measures, the correlation of the data variables reveals linear or nonlinear connections.
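    As a minimal illustration of the robust, inclusive outlier analysis described above (not the thesis's own implementation), scikit-learn's Minimum Covariance Determinant estimator can be fitted directly on damage-sensitive features and its robust Mahalanobis distances thresholded to flag potential outliers; the chi-square cutoff used here is an assumed convention, and the synthetic data is purely for demonstration.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def mcd_outlier_flags(features, alpha=0.975):
    """Fit the Minimum Covariance Determinant estimator on damage-sensitive features
    and flag observations whose squared robust Mahalanobis distance exceeds a
    chi-square cutoff. No separate 'undamaged' training set is required: the fit
    is inclusive of all observations."""
    mcd = MinCovDet(random_state=0).fit(features)
    d2 = mcd.mahalanobis(features)                      # squared robust distances
    cutoff = chi2.ppf(alpha, df=features.shape[1])      # assumed threshold choice
    return d2 > cutoff, d2

if __name__ == "__main__":
    # Synthetic stand-in for FRF-derived feature vectors, one row per observation.
    rng = np.random.default_rng(0)
    normal = rng.normal(size=(200, 4))
    novel = rng.normal(loc=4.0, size=(10, 4))
    flags, distances = mcd_outlier_flags(np.vstack([normal, novel]))
    print(f"{flags.sum()} observations flagged as outliers")
```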

    Deep Active Learning Explored Across Diverse Label Spaces

    Deep learning architectures have been widely explored in computer vision and have depicted commendable performance in a variety of applications. A fundamental challenge in training deep networks is the requirement of large amounts of labeled training data. While gathering large quantities of unlabeled data is cheap and easy, annotating the data is an expensive process in terms of time, labor and human expertise. Thus, developing algorithms that minimize the human effort in training deep models is of immense practical importance. Active learning algorithms automatically identify salient and exemplar samples from large amounts of unlabeled data and can augment maximal information to supervised learning models, thereby reducing the human annotation effort in training machine learning models. The goal of this dissertation is to fuse ideas from deep learning and active learning and design novel deep active learning algorithms. The proposed learning methodologies explore diverse label spaces to solve different computer vision applications. Three major contributions have emerged from this work: (i) a deep active framework for multi-class image classification, (ii) a deep active model with and without label correlation for multi-label image classification and (iii) a deep active paradigm for regression. Extensive empirical studies on a variety of multi-class, multi-label and regression vision datasets corroborate the potential of the proposed methods for real-world applications. Additional contributions include: (i) a multimodal emotion database consisting of recordings of facial expressions, body gestures, vocal expressions and physiological signals of actors enacting various emotions, (ii) four multimodal deep belief network models and (iii) an in-depth analysis of the effect of transfer of multimodal emotion features between source and target networks on classification accuracy and training time. These related contributions help comprehend the challenges involved in training deep learning models and motivate the main goal of this dissertation.
    Doctoral Dissertation, Electrical Engineering, 201
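    The sample-selection step at the heart of active learning can be sketched with a generic, assumed acquisition rule; the dissertation's own selection criteria for the multi-class, multi-label and regression settings are not reproduced here. The snippet below simply ranks unlabeled samples by predictive entropy under the current model and returns the most uncertain ones for annotation.

```python
import torch

def entropy_acquisition(model, unlabeled_loader, budget=100, device="cpu"):
    """Score unlabeled samples by predictive entropy and return the indices of the
    most uncertain ones to send for annotation (one generic acquisition rule)."""
    model.eval()
    scores = []
    with torch.no_grad():
        for x, _ in unlabeled_loader:   # assumes the loader yields (inputs, placeholder)
            probs = torch.softmax(model(x.to(device)), dim=1)
            ent = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
            scores.append(ent.cpu())
    scores = torch.cat(scores)
    return torch.topk(scores, k=min(budget, len(scores))).indices
```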