6 research outputs found

    Assortative Mating in Genetic Algorithms for Dynamic Problems


    Adaptive Mating Schemes in Genetic Algorithms

    Thesis (Ph.D.) -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, February 2017. Advisor: Byung-Ro Moon.

    A mating scheme is the method by which two parents are selected to produce an offspring solution, and it affects the overall behavior of a genetic algorithm. This thesis studies mating schemes based on the Hungarian method: one scheme minimizes the sum of the matched distances, another maximizes it, and a third matches pairs randomly for comparison. These schemes were applied to two well-known problems, the traveling salesman problem and the graph bisection problem, and the generation-by-generation change of the best solution was analyzed. Based on this analysis, the thesis proposes a simple hybrid mating scheme, which produced better results than the non-hybrid schemes. The thesis further proposes its core method, an adaptive way of combining mating schemes. The adaptive method selects one of the three Hungarian schemes: every mated pair receives a vote that decides the mating scheme for the next generation, and each pair's preference is determined by the ratio of the distance between the parents to the distance between parent and offspring. The proposed adaptive method outperformed every single Hungarian mating scheme, the non-adaptively combined method, traditional roulette-wheel selection, and existing distance-based methods. It also selected an appropriate scheme in environments combined with periodic immigration of new solutions and with local optimization. Finally, the thesis replaces the Hungarian method with a procedure that finds a maximal or minimal local optimum; this approach, too, outperformed the single methods based on local optima.

    Table of contents:
    I. Introduction: 1.1 Motivation; 1.2 Related Work; 1.3 Contribution; 1.4 Organization
    II. Preliminary: 2.1 Hungarian Method; 2.2 Geometric Operators (2.2.1 Formal Definitions); 2.3 Exploration Versus Exploitation Trade-off; 2.4 Test Problems and Distance Metric
    III. Hungarian Mating Scheme: 3.1 Proposed Scheme; 3.2 Tested GA; 3.3 Observation (3.3.1 Traveling Salesman Problem; 3.3.2 Graph Bisection Problem)
    IV. Hybrid and Adaptive Scheme: 4.1 Simple Hybrid Scheme; 4.2 Adaptive Scheme (4.2.1 Significance of Adaptive Scheme; 4.2.2 Proposed Method; 4.2.3 Theoretical Support; 4.2.4 Experiments; 4.2.5 Traveling Salesman Problem; 4.2.6 Graph Bisection Problem; 4.2.7 Comparison with Traditional Method; 4.2.8 Comparison with Distance-based Methods)
    V. Tests in Various Environments: 5.1 Hybrid GA (5.1.1 Experiment Settings; 5.1.2 Results and Discussions); 5.2 GA with New Individuals (5.2.1 Experiment Settings; 5.2.2 Results and Discussions)
    VI. A Revised Version of Adaptive Method: 6.1 Hungarian Mating Scheme; 6.2 Experiment Settings; 6.3 Results and Discussions
    VII. Conclusion: 7.1 Summary; 7.2 Future Work
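    The core idea of the Hungarian mating schemes is to pair parents so that the sum of the matched distances is minimized or maximized. A minimal sketch of such distance-optimal pairing, using brute force over assignments for tiny groups in place of the Hungarian algorithm (the bit-string encoding and `hamming` distance are illustrative assumptions, not the thesis's exact setup):

    ```python
    from itertools import permutations

    def hamming(a, b):
        # Number of positions at which two equal-length strings differ.
        return sum(x != y for x, y in zip(a, b))

    def assign_pairs(group_a, group_b, maximize=False):
        """Pair each individual in group_a with one in group_b so the sum of
        pairwise distances is optimal.  Brute force over all assignments
        gives the same optimum the Hungarian method finds in O(n^3), so it
        only illustrates the objective for very small groups."""
        best, best_cost = None, None
        for perm in permutations(range(len(group_b))):
            cost = sum(hamming(group_a[i], group_b[j])
                       for i, j in enumerate(perm))
            better = cost > best_cost if maximize else cost < best_cost
            if best is None or better:
                best, best_cost = perm, cost
        pairs = [(group_a[i], group_b[j]) for i, j in enumerate(best)]
        return pairs, best_cost
    ```

    With `maximize=False` this models the distance-minimizing scheme (mate similar parents); with `maximize=True`, the distance-maximizing one; random shuffling of `group_b` would model the comparison scheme.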

    Enhanced Deep Network Designs Using Mitochondrial DNA Based Genetic Algorithm And Importance Sampling

    Machine learning (ML) is playing an increasingly important role in our lives. It has already made a huge impact in areas such as cancer diagnosis, precision medicine, self-driving cars, natural disaster prediction, speech recognition, and more. The painstakingly handcrafted feature extractors used in traditional learning, classification, and pattern recognition systems are not scalable to large datasets or adaptable to different classes of problems or domains. The resurgence of machine learning in the form of Deep Learning (DL) over the last decade, after multiple AI (artificial intelligence) winters and hype cycles, is a result of the convergence of advances in training algorithms, the availability of massive data (big data), and innovation in compute resources (GPUs and the cloud). To solve more complex problems with machine learning, all three of these areas must be optimized: algorithms, datasets, and compute. Our dissertation research presents an original application of the nature-inspired idea of mitochondrial DNA (mtDNA) to improve deep learning network design. Additional fine-tuning is provided by a Monte Carlo based method called importance sampling (IS). The primary performance indicators for machine learning are model accuracy, loss, and training time. The goal of our dissertation is to provide a framework that addresses all of these by optimizing network designs (in the form of hyperparameter optimization) and the dataset using an enhanced Genetic Algorithm (GA) and importance sampling. Algorithms are by far the most important aspect of machine learning. We demonstrate the application of mitochondrial DNA to complement the standard genetic algorithm for architecture optimization of a deep Convolutional Neural Network (CNN). We use importance sampling to reduce dataset variance and to sample more often from the instances that add greater value from the training-outcome perspective.
    And finally, we leverage the massively parallel and distributed processing of GPUs in the cloud to speed up training. Thus, our multi-approach method for enhancing deep learning combines architecture optimization, dataset optimization, and the power of the cloud to drive better model accuracy and reduce training time.
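    The importance-sampling idea described above (drawing training examples more often when they contribute more to the training outcome) can be illustrated with loss-proportional sampling. This is a minimal sketch under the assumption that per-example loss is used as the importance weight; the function name and weighting choice are illustrative, not the authors' actual pipeline:

    ```python
    import random

    def importance_sample(losses, k, rng=None):
        """Draw k example indices with probability proportional to each
        example's loss, so harder examples are revisited more often.
        `losses` must contain at least one positive value."""
        rng = rng or random.Random(0)  # fixed seed for reproducibility
        total = sum(losses)
        if total <= 0:
            raise ValueError("losses must sum to a positive value")
        weights = [loss / total for loss in losses]
        return rng.choices(range(len(losses)), weights=weights, k=k)
    ```

    In a training loop, the losses would be refreshed each epoch so the sampling distribution tracks which instances the current model still finds hard.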

    Advances in Evolutionary Algorithms

    With the recent trends toward massive data sets and significant computational power, combined with advances in evolutionary algorithms, evolutionary computation is becoming much more relevant to practice. The aim of this book is to present recent improvements, innovative ideas, and concepts from a part of the huge field of evolutionary algorithms.